Sep 13 00:06:24.910097 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:06:24.911173 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:24.911202 kernel: BIOS-provided physical RAM map:
Sep 13 00:06:24.911212 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:06:24.911221 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:06:24.911231 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 13 00:06:24.911242 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 13 00:06:24.911252 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:06:24.911263 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:06:24.911276 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:06:24.911287 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:06:24.911297 kernel: NX (Execute Disable) protection: active
Sep 13 00:06:24.911306 kernel: APIC: Static calls initialized
Sep 13 00:06:24.911316 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:06:24.911330 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 13 00:06:24.911345 kernel: SMBIOS 2.7 present.
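
The usable ranges in the BIOS-e820 map above can be totalled to cross-check the kernel's later "Memory:" line. A minimal Python sketch (the regex and the hard-coded sample lines are assumptions based on the format logged here; on a live system the same lines could be pulled from dmesg output):

    import re

    # Usable ranges exactly as logged in the e820 map above.
    e820_lines = [
        "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
        "BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable",
        "BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable",
    ]

    pattern = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")
    total = 0
    for line in e820_lines:
        m = pattern.search(line)
        if m:
            start, end = (int(g, 16) for g in m.groups())
            total += end - start + 1  # ranges are inclusive

    # ~1990 MiB, consistent with the 2037804K total reported later in the log.
    print(f"usable: {total / 2**20:.1f} MiB")
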
Sep 13 00:06:24.911358 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:06:24.911371 kernel: Hypervisor detected: KVM
Sep 13 00:06:24.911382 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:06:24.911394 kernel: kvm-clock: using sched offset of 3805852846 cycles
Sep 13 00:06:24.911406 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:06:24.911418 kernel: tsc: Detected 2499.998 MHz processor
Sep 13 00:06:24.911431 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:06:24.911445 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:06:24.911457 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:06:24.911473 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:06:24.911487 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:06:24.911516 kernel: Using GB pages for direct mapping
Sep 13 00:06:24.911529 kernel: Secure boot disabled
Sep 13 00:06:24.911542 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:24.911556 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:06:24.911570 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:06:24.911584 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:06:24.911597 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:06:24.911614 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:06:24.911627 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:06:24.911641 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:06:24.911655 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:06:24.911668 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:06:24.911682 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:06:24.911702 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:06:24.911719 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:06:24.911733 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:06:24.911748 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:06:24.911763 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:06:24.911777 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:06:24.911791 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:06:24.911805 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:06:24.911821 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:06:24.911836 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:06:24.911850 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:06:24.911865 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:06:24.911879 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:06:24.911892 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:06:24.911907 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:06:24.911920 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:06:24.911934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:06:24.911952 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:06:24.911965 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:06:24.911980 kernel: Zone ranges:
Sep 13 00:06:24.911994 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:06:24.912009 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:06:24.912023 kernel: Normal empty
Sep 13 00:06:24.912038 kernel: Movable zone start for each node
Sep 13 00:06:24.912052 kernel: Early memory node ranges
Sep 13 00:06:24.912065 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:06:24.912079 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:06:24.912091 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:06:24.912103 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:06:24.912116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:06:24.912129 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:06:24.912157 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:06:24.912170 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:06:24.912183 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:06:24.912196 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:06:24.912212 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:06:24.912225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:06:24.912238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:06:24.912251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:06:24.912263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:06:24.912277 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:06:24.912290 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:06:24.912303 kernel: TSC deadline timer available
Sep 13 00:06:24.912316 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:06:24.912329 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:06:24.912344 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:06:24.912357 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:06:24.912370 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:06:24.912383 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:06:24.912397 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:06:24.912409 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:06:24.912422 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:06:24.912434 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:06:24.912447 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:06:24.912465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
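
The kernel command line above is exposed verbatim in /proc/cmdline once the system is up, which is where tooling reads flags such as flatcar.first_boot and verity.usrhash. A minimal parsing sketch (assuming a Linux host; shlex handles any quoted values, and repeated keys such as console= simply keep the last occurrence in this simple dict):

    import shlex

    with open("/proc/cmdline") as f:
        tokens = shlex.split(f.read().strip())

    params = {}
    for token in tokens:
        # "root=LABEL=ROOT" splits on the first '=' only; bare flags map to "".
        key, _, value = token.partition("=")
        params[key] = value

    print(params.get("root"))            # e.g. LABEL=ROOT
    print(params.get("verity.usrhash"))  # the dm-verity root hash for /usr
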
Sep 13 00:06:24.912479 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:24.912491 kernel: random: crng init done
Sep 13 00:06:24.912504 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:24.912517 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:06:24.912530 kernel: Fallback order for Node 0: 0
Sep 13 00:06:24.912543 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:06:24.912556 kernel: Policy zone: DMA32
Sep 13 00:06:24.912572 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:24.912585 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162936K reserved, 0K cma-reserved)
Sep 13 00:06:24.912599 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:06:24.912612 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:06:24.912625 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:06:24.912638 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:06:24.912651 kernel: Dynamic Preempt: voluntary
Sep 13 00:06:24.912664 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:24.912677 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:24.912694 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:06:24.912707 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:24.912720 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:06:24.912733 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:24.912746 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:24.912758 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:06:24.912772 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:06:24.912798 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:06:24.912812 kernel: Console: colour dummy device 80x25
Sep 13 00:06:24.912826 kernel: printk: console [tty0] enabled
Sep 13 00:06:24.912840 kernel: printk: console [ttyS0] enabled
Sep 13 00:06:24.912856 kernel: ACPI: Core revision 20230628
Sep 13 00:06:24.912870 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:06:24.912885 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:06:24.912899 kernel: x2apic enabled
Sep 13 00:06:24.912913 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:06:24.912927 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:06:24.912944 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 13 00:06:24.912957 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:06:24.912971 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:06:24.912985 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:06:24.912999 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:06:24.913012 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:06:24.913026 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:06:24.913040 kernel: RETBleed: Vulnerable
Sep 13 00:06:24.913053 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:06:24.913070 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:06:24.913083 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:06:24.913096 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:06:24.913110 kernel: active return thunk: its_return_thunk
Sep 13 00:06:24.913124 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:06:24.915197 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:06:24.915218 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:06:24.915233 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:06:24.915249 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:06:24.915264 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:06:24.915280 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:06:24.915300 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:06:24.915316 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:06:24.915331 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:06:24.915346 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:06:24.915361 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:06:24.915376 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:06:24.915391 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:06:24.915407 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:06:24.915422 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:06:24.915437 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:06:24.915453 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:06:24.915468 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:06:24.915488 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:24.915503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:06:24.915518 kernel: landlock: Up and running.
Sep 13 00:06:24.915533 kernel: SELinux: Initializing.
Sep 13 00:06:24.915548 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.915564 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.915579 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:06:24.915594 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:24.915610 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:24.915626 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:24.915645 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:06:24.915661 kernel: signal: max sigframe size: 3632
Sep 13 00:06:24.915677 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:24.915693 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:06:24.915709 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:06:24.915724 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:24.915740 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:06:24.915755 kernel: .... node #0, CPUs: #1
Sep 13 00:06:24.915771 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:06:24.915789 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:06:24.915800 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:06:24.915812 kernel: smpboot: Max logical packages: 1
Sep 13 00:06:24.915825 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 13 00:06:24.915838 kernel: devtmpfs: initialized
Sep 13 00:06:24.915850 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:06:24.915864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:06:24.915877 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:24.915890 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.915907 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:24.915919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:24.915934 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:24.915949 kernel: audit: type=2000 audit(1757721985.210:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:24.915963 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:24.915976 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:06:24.915988 kernel: cpuidle: using governor menu
Sep 13 00:06:24.916001 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:24.916017 kernel: dca service started, version 1.12.1
Sep 13 00:06:24.916031 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:06:24.916045 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
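
The mitigation status lines above (Spectre V2: Retpolines, RETBleed: Vulnerable, and the MDS and MMIO Stale Data SMT warnings) are also exported by the kernel under sysfs, so the same information can be checked on the running system without grepping dmesg. A small sketch:

    from pathlib import Path

    # One file per known CPU vulnerability; contents match the boot-log wording,
    # e.g. "Mitigation: Retpolines" or "Vulnerable: Clear CPU buffers attempted, no microcode".
    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
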
Sep 13 00:06:24.916060 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:06:24.916076 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:06:24.916090 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:24.916105 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:06:24.916120 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:24.917176 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:24.917202 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:24.917219 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:24.917233 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:06:24.917248 kernel: ACPI: Interpreter enabled
Sep 13 00:06:24.917262 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:06:24.917277 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:06:24.917291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:06:24.917306 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:06:24.917321 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:06:24.917336 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:24.917569 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:24.917711 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:06:24.917842 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:06:24.917860 kernel: acpiphp: Slot [3] registered
Sep 13 00:06:24.917875 kernel: acpiphp: Slot [4] registered
Sep 13 00:06:24.917889 kernel: acpiphp: Slot [5] registered
Sep 13 00:06:24.917903 kernel: acpiphp: Slot [6] registered
Sep 13 00:06:24.917924 kernel: acpiphp: Slot [7] registered
Sep 13 00:06:24.917938 kernel: acpiphp: Slot [8] registered
Sep 13 00:06:24.917954 kernel: acpiphp: Slot [9] registered
Sep 13 00:06:24.917968 kernel: acpiphp: Slot [10] registered
Sep 13 00:06:24.917982 kernel: acpiphp: Slot [11] registered
Sep 13 00:06:24.917998 kernel: acpiphp: Slot [12] registered
Sep 13 00:06:24.918013 kernel: acpiphp: Slot [13] registered
Sep 13 00:06:24.918028 kernel: acpiphp: Slot [14] registered
Sep 13 00:06:24.918043 kernel: acpiphp: Slot [15] registered
Sep 13 00:06:24.918061 kernel: acpiphp: Slot [16] registered
Sep 13 00:06:24.918076 kernel: acpiphp: Slot [17] registered
Sep 13 00:06:24.918090 kernel: acpiphp: Slot [18] registered
Sep 13 00:06:24.918105 kernel: acpiphp: Slot [19] registered
Sep 13 00:06:24.918119 kernel: acpiphp: Slot [20] registered
Sep 13 00:06:24.919174 kernel: acpiphp: Slot [21] registered
Sep 13 00:06:24.919198 kernel: acpiphp: Slot [22] registered
Sep 13 00:06:24.919215 kernel: acpiphp: Slot [23] registered
Sep 13 00:06:24.919231 kernel: acpiphp: Slot [24] registered
Sep 13 00:06:24.919248 kernel: acpiphp: Slot [25] registered
Sep 13 00:06:24.919269 kernel: acpiphp: Slot [26] registered
Sep 13 00:06:24.919286 kernel: acpiphp: Slot [27] registered
Sep 13 00:06:24.919302 kernel: acpiphp: Slot [28] registered
Sep 13 00:06:24.919318 kernel: acpiphp: Slot [29] registered
Sep 13 00:06:24.919335 kernel: acpiphp: Slot [30] registered
Sep 13 00:06:24.919352 kernel: acpiphp: Slot [31] registered
Sep 13 00:06:24.919368 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:24.919565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:06:24.919708 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:06:24.919841 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:24.919973 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:24.920098 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:06:24.925106 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:24.925333 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:06:24.925479 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:06:24.925624 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:06:24.925752 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:06:24.925878 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:06:24.926004 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:06:24.928210 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:06:24.928421 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:06:24.928574 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:06:24.928713 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:06:24.928853 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:06:24.928983 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:06:24.929112 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:06:24.929257 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:06:24.929387 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:06:24.929534 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:06:24.929663 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:06:24.929797 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:06:24.929926 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:06:24.929945 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:06:24.929962 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:06:24.929978 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:06:24.929993 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:06:24.930013 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:06:24.930029 kernel: iommu: Default domain type: Translated
Sep 13 00:06:24.930044 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:06:24.930060 kernel: efivars: Registered efivars operations
Sep 13 00:06:24.930076 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:06:24.930089 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:06:24.930104 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:06:24.930119 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:06:24.930271 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:06:24.930407 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:06:24.930541 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:06:24.930560 kernel: vgaarb: loaded
Sep 13 00:06:24.930577 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:06:24.930593 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:06:24.930609 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:06:24.930624 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:24.930641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:24.930662 kernel: pnp: PnP ACPI init
Sep 13 00:06:24.930677 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:06:24.930693 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:06:24.930709 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:24.930725 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:24.930741 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:06:24.930756 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:24.930771 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:06:24.930787 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:06:24.930806 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:06:24.930822 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.930837 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:24.930853 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:24.930868 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:06:24.931026 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:06:24.933207 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:06:24.933358 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:24.933473 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:24.933596 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:06:24.933737 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:06:24.933756 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:24.933771 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:06:24.933786 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:06:24.933800 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:06:24.933814 kernel: Initialise system trusted keyrings
Sep 13 00:06:24.933827 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:06:24.933846 kernel: Key type asymmetric registered
Sep 13 00:06:24.933859 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:24.933873 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:06:24.933887 kernel: io scheduler mq-deadline registered
Sep 13 00:06:24.933900 kernel: io scheduler kyber registered
Sep 13 00:06:24.933913 kernel: io scheduler bfq registered
Sep 13 00:06:24.933936 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:06:24.933950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:24.933962 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:06:24.933978 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:06:24.933992 kernel: i8042: Warning: Keylock active
Sep 13 00:06:24.934005 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:06:24.934017 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:06:24.934272 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:06:24.934436 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:06:24.934574 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:06:24 UTC (1757721984)
Sep 13 00:06:24.935328 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:06:24.935364 kernel: intel_pstate: CPU model not supported
Sep 13 00:06:24.935380 kernel: efifb: probing for efifb
Sep 13 00:06:24.935394 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 13 00:06:24.935407 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:06:24.935422 kernel: efifb: scrolling: redraw
Sep 13 00:06:24.935436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:06:24.935452 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:06:24.935468 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:06:24.935481 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:06:24.935501 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:06:24.935517 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:24.935533 kernel: Segment Routing with IPv6
Sep 13 00:06:24.935550 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:24.935566 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:24.935583 kernel: Key type dns_resolver registered
Sep 13 00:06:24.935625 kernel: IPI shorthand broadcast: enabled
Sep 13 00:06:24.935645 kernel: sched_clock: Marking stable (453002505, 124707416)->(659713411, -82003490)
Sep 13 00:06:24.935662 kernel: registered taskstats version 1
Sep 13 00:06:24.935682 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:24.935699 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:06:24.935716 kernel: Key type .fscrypt registered
Sep 13 00:06:24.935733 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:24.935750 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:24.935767 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:24.935784 kernel: ima: No architecture policies found
Sep 13 00:06:24.935801 kernel: clk: Disabling unused clocks
Sep 13 00:06:24.935818 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:06:24.935839 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:06:24.935856 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:06:24.935872 kernel: Run /init as init process
Sep 13 00:06:24.935888 kernel: with arguments:
Sep 13 00:06:24.935902 kernel: /init
Sep 13 00:06:24.935917 kernel: with environment:
Sep 13 00:06:24.935932 kernel: HOME=/
Sep 13 00:06:24.935948 kernel: TERM=linux
Sep 13 00:06:24.935964 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:24.935989 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:24.936008 systemd[1]: Detected virtualization amazon.
Sep 13 00:06:24.936025 systemd[1]: Detected architecture x86-64.
Sep 13 00:06:24.936042 systemd[1]: Running in initrd.
Sep 13 00:06:24.936058 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:24.936075 systemd[1]: Hostname set to .
Sep 13 00:06:24.936095 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:24.936114 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:24.936131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:24.937192 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:24.937209 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:06:24.937223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:24.937237 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:06:24.937258 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:06:24.937275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:06:24.937291 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:06:24.937305 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:24.937320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:24.937335 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:24.937353 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:24.937367 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:24.937382 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:24.937398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:24.937414 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:24.937432 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:06:24.937450 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:06:24.937468 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:24.937489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:24.937507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:24.937525 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:24.937542 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:06:24.937560 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:24.937578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:06:24.937595 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:24.937611 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:24.937628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:24.937649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:24.937666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:24.937684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:24.937735 systemd-journald[178]: Collecting audit messages is disabled.
Sep 13 00:06:24.937776 systemd[1]: Finished systemd-fsck-usr.service.
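
The \x2d sequences in the device unit names above are systemd's path escaping: '/' becomes '-', so a literal '-' inside a path component has to be hex-escaped. A rough Python approximation of what systemd-escape --path produces (the real rules also special-case leading dots and the root path, which this sketch ignores):

    def systemd_escape_path(path: str) -> str:
        escaped = []
        for comp in path.strip("/").split("/"):
            out = []
            for ch in comp:
                if ch.isalnum() or ch in ":_.":
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))  # '-' -> \x2d
            escaped.append("".join(out))
        return "-".join(escaped)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the unit name logged above
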
Sep 13 00:06:24.937794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:24.937812 systemd-journald[178]: Journal started
Sep 13 00:06:24.937850 systemd-journald[178]: Runtime Journal (/run/log/journal/ec233d5c32255ac055f618400896f7bc) is 4.7M, max 38.2M, 33.4M free.
Sep 13 00:06:24.935340 systemd-modules-load[179]: Inserted module 'overlay'
Sep 13 00:06:24.943852 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:24.948785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:24.950699 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:24.961383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:24.965341 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:24.974061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:24.986249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:24.991187 kernel: Bridge firewalling registered
Sep 13 00:06:24.991769 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 13 00:06:24.994175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:25.003475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:25.009856 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:06:25.004468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:25.009681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:25.017410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:25.028348 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:06:25.029343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:25.033359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:25.044498 dracut-cmdline[211]: dracut-dracut-053
Sep 13 00:06:25.047661 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:25.086206 systemd-resolved[215]: Positive Trust Anchors:
Sep 13 00:06:25.086224 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:25.086288 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:25.095833 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 13 00:06:25.097960 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:25.099314 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:25.136168 kernel: SCSI subsystem initialized
Sep 13 00:06:25.146161 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:25.157163 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:25.178401 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:25.178488 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:25.216739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:25.223314 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:06:25.247358 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:25.247437 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:06:25.250173 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:06:25.290171 kernel: raid6: avx512x4 gen() 15307 MB/s
Sep 13 00:06:25.308160 kernel: raid6: avx512x2 gen() 15260 MB/s
Sep 13 00:06:25.326181 kernel: raid6: avx512x1 gen() 14408 MB/s
Sep 13 00:06:25.344161 kernel: raid6: avx2x4 gen() 15169 MB/s
Sep 13 00:06:25.362164 kernel: raid6: avx2x2 gen() 15206 MB/s
Sep 13 00:06:25.380316 kernel: raid6: avx2x1 gen() 11465 MB/s
Sep 13 00:06:25.380371 kernel: raid6: using algorithm avx512x4 gen() 15307 MB/s
Sep 13 00:06:25.399288 kernel: raid6: .... xor() 7810 MB/s, rmw enabled
Sep 13 00:06:25.399333 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:06:25.420171 kernel: xor: automatically using best checksumming function avx
Sep 13 00:06:25.576173 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:06:25.586119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:06:25.591348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:25.615529 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Sep 13 00:06:25.620727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:25.629400 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:06:25.648198 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Sep 13 00:06:25.679015 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:25.683457 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:25.735039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:25.745571 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:06:25.764479 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:25.769688 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:25.771627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:25.772216 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:25.781377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:06:25.813758 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:25.842062 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:06:25.845316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:25.846310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:25.848420 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:25.848975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:25.849260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:25.849830 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:25.859582 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:25.868215 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:25.868361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:25.883164 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:06:25.883473 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:06:25.883920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:25.898196 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:06:25.898470 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:06:25.903053 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:06:25.903111 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 13 00:06:25.905426 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:06:25.916319 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:77:4e:de:aa:7f
Sep 13 00:06:25.915328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:25.921500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:25.926362 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:06:25.934612 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:25.934685 kernel: GPT:9289727 != 16777215
Sep 13 00:06:25.934705 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:25.936163 kernel: GPT:9289727 != 16777215
Sep 13 00:06:25.939358 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:25.939427 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:25.946695 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:06:25.964418 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
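
The GPT complaints above are the classic signature of a grown disk: the image was built at one size, the EBS volume is larger, and the backup GPT header still sits where the original image ended (LBA 9289727) rather than at the last sector (LBA 16777215). With 512-byte sectors the logged numbers work out as in the small sketch below; the interpretation is an assumption from the logged values, and disk-uuid.service rewriting the headers shortly afterwards is consistent with it:

    SECTOR = 512
    alt_header_lba = 9289727   # where the backup GPT header actually is
    last_lba = 16777215        # last addressable sector of the resized device

    print(f"built image size: {(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
    print(f"device size now:  {(last_lba + 1) * SECTOR / 2**30:.2f} GiB")        # 8.00 GiB
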
Sep 13 00:06:26.012271 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Sep 13 00:06:26.025174 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Sep 13 00:06:26.074315 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 13 00:06:26.085858 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 13 00:06:26.091959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 13 00:06:26.101688 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 13 00:06:26.102189 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 13 00:06:26.106295 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:06:26.114606 disk-uuid[633]: Primary Header is updated.
Sep 13 00:06:26.114606 disk-uuid[633]: Secondary Entries is updated.
Sep 13 00:06:26.114606 disk-uuid[633]: Secondary Header is updated.
Sep 13 00:06:26.121161 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:26.130158 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:26.135226 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:27.141037 disk-uuid[634]: The operation has completed successfully.
Sep 13 00:06:27.141983 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:27.275499 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:27.275627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:06:27.297357 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:06:27.303298 sh[977]: Success
Sep 13 00:06:27.318233 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:06:27.423003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:06:27.430253 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:06:27.432539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:06:27.468268 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:06:27.468331 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:27.471482 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:06:27.471538 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:06:27.472727 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:06:27.515819 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:06:27.529210 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:06:27.530484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:06:27.541420 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:06:27.545355 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:06:27.569831 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:27.569901 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:27.571526 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:06:27.580242 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:06:27.596607 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:27.596113 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:06:27.603817 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:06:27.612378 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:06:27.650964 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:27.657392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:06:27.688955 systemd-networkd[1169]: lo: Link UP
Sep 13 00:06:27.688968 systemd-networkd[1169]: lo: Gained carrier
Sep 13 00:06:27.691063 systemd-networkd[1169]: Enumeration completed
Sep 13 00:06:27.691271 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:06:27.691970 systemd[1]: Reached target network.target - Network.
Sep 13 00:06:27.692152 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:27.692156 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:27.696211 systemd-networkd[1169]: eth0: Link UP
Sep 13 00:06:27.696216 systemd-networkd[1169]: eth0: Gained carrier
Sep 13 00:06:27.696231 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:27.710243 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.25.42/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:06:27.893929 ignition[1110]: Ignition 2.19.0
Sep 13 00:06:27.893944 ignition[1110]: Stage: fetch-offline
Sep 13 00:06:27.894152 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:27.894162 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:27.894473 ignition[1110]: Ignition finished successfully
Sep 13 00:06:27.896121 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:27.899451 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:06:27.915741 ignition[1177]: Ignition 2.19.0
Sep 13 00:06:27.915753 ignition[1177]: Stage: fetch
Sep 13 00:06:27.916093 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:27.916103 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:27.916225 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:27.933683 ignition[1177]: PUT result: OK
Sep 13 00:06:27.935530 ignition[1177]: parsed url from cmdline: ""
Sep 13 00:06:27.935540 ignition[1177]: no config URL provided
Sep 13 00:06:27.935547 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:27.935559 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:27.935577 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:27.936335 ignition[1177]: PUT result: OK
Sep 13 00:06:27.936396 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:06:27.937169 ignition[1177]: GET result: OK
Sep 13 00:06:27.937243 ignition[1177]: parsing config with SHA512: 1c9709d8158f01e1ee40eb6d5269972ba59a12b8bda9dbf0dfa001aa0e9a2a5f390f27536c50b4d1f04505267ef4e2c523209eccb1cfae3da2d0f2cc134bb388
Sep 13 00:06:27.941615 unknown[1177]: fetched base config from "system"
Sep 13 00:06:27.943167 unknown[1177]: fetched base config from "system"
Sep 13 00:06:27.943300 unknown[1177]: fetched user config from "aws"
Sep 13 00:06:27.944488 ignition[1177]: fetch: fetch complete
Sep 13 00:06:27.945022 ignition[1177]: fetch: fetch passed
Sep 13 00:06:27.945099 ignition[1177]: Ignition finished successfully
Sep 13 00:06:27.946855 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:06:27.951434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:06:27.979609 ignition[1184]: Ignition 2.19.0
Sep 13 00:06:27.979623 ignition[1184]: Stage: kargs
Sep 13 00:06:27.980117 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:27.980131 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:27.980294 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:27.981620 ignition[1184]: PUT result: OK
Sep 13 00:06:27.985522 ignition[1184]: kargs: kargs passed
Sep 13 00:06:27.985597 ignition[1184]: Ignition finished successfully
Sep 13 00:06:27.987834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:27.993345 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:06:28.006860 ignition[1190]: Ignition 2.19.0
Sep 13 00:06:28.006875 ignition[1190]: Stage: disks
Sep 13 00:06:28.007501 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:28.007515 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:28.007637 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:28.008501 ignition[1190]: PUT result: OK
Sep 13 00:06:28.011103 ignition[1190]: disks: disks passed
Sep 13 00:06:28.011336 ignition[1190]: Ignition finished successfully
Sep 13 00:06:28.012507 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:06:28.013572 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:28.014131 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:06:28.014706 systemd[1]: Reached target local-fs.target - Local File Systems.
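
The fetch stage above is the standard IMDSv2 exchange: PUT to mint a session token, GET the user data with that token, then hash what came back (Ignition logs the SHA512 of the config it parsed). A minimal sketch of the same sequence using only the Python standard library; the endpoint paths are taken from the log, the header names are AWS's documented IMDSv2 headers, and this is an illustration rather than Ignition's actual code:

    import hashlib
    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT http://169.254.169.254/latest/api/token, as in the log lines above.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # GET http://169.254.169.254/2019-10-01/user-data with the session token.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(req, timeout=2).read()

    # Corresponds to the "parsing config with SHA512: ..." line above.
    print(hashlib.sha512(user_data).hexdigest())
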
Sep 13 00:06:28.015406 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:28.015657 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:28.023372 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:06:28.062816 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:06:28.066107 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:06:28.070283 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:06:28.169471 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:06:28.170103 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:06:28.171326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:06:28.184258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:28.186995 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:06:28.189046 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:06:28.189423 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:06:28.189458 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:28.197014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:06:28.203458 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:06:28.209164 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217)
Sep 13 00:06:28.213256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:28.213328 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:28.213348 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:06:28.228186 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:06:28.229471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:28.333026 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:06:28.352832 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:06:28.357115 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:06:28.364052 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:06:28.537500 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:06:28.542279 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:06:28.546333 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:28.555015 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:06:28.556294 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:28.597215 ignition[1332]: INFO : Ignition 2.19.0 Sep 13 00:06:28.597215 ignition[1332]: INFO : Stage: mount Sep 13 00:06:28.597215 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:28.597215 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:06:28.597215 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:06:28.601046 ignition[1332]: INFO : PUT result: OK Sep 13 00:06:28.597890 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:06:28.602681 ignition[1332]: INFO : mount: mount passed Sep 13 00:06:28.603847 ignition[1332]: INFO : Ignition finished successfully Sep 13 00:06:28.604778 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:06:28.609293 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:06:28.624383 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:06:28.642173 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1344) Sep 13 00:06:28.646248 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:06:28.646318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:06:28.646339 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:06:28.652169 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:06:28.654783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:06:28.682919 ignition[1360]: INFO : Ignition 2.19.0 Sep 13 00:06:28.682919 ignition[1360]: INFO : Stage: files Sep 13 00:06:28.684514 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:28.684514 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:06:28.684514 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:06:28.684514 ignition[1360]: INFO : PUT result: OK Sep 13 00:06:28.687014 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:06:28.687976 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:06:28.687976 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:06:28.704314 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:06:28.705311 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:06:28.705311 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:06:28.704857 unknown[1360]: wrote ssh authorized keys file for user: core Sep 13 00:06:28.707985 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:06:28.707985 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 13 00:06:28.707985 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:06:28.707985 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: 
attempt #1 Sep 13 00:06:28.802700 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:06:29.571386 systemd-networkd[1169]: eth0: Gained IPv6LL Sep 13 00:06:30.260908 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:06:30.262388 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:30.262388 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:30.262388 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:30.262388 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:30.262388 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:30.265736 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:06:30.539349 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:06:31.030728 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:06:31.030728 ignition[1360]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 13 00:06:31.033158 ignition[1360]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 13 00:06:31.035415 
ignition[1360]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:31.035415 ignition[1360]: INFO : files: files passed Sep 13 00:06:31.035415 ignition[1360]: INFO : Ignition finished successfully Sep 13 00:06:31.035996 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:06:31.041524 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:06:31.047494 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:06:31.052573 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:06:31.052699 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:06:31.065004 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:31.065004 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:31.068218 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:31.070492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:31.071272 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:06:31.076334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:06:31.106450 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:06:31.106565 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:06:31.107632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:06:31.108509 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:06:31.109251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:06:31.113303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:06:31.128175 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:31.134359 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:06:31.146604 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
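Taken together, the files-stage ops logged above (SSH keys for the core user, plain files fetched over HTTPS, a symlink under /etc/extensions, a containerd drop-in, and a unit written and preset to enabled) are exactly what the storage and systemd sections of an Ignition v3 config produce. Below is a hedged reconstruction of the shape of such a config, written as a Python dict mirroring the JSON; field names follow the Ignition v3 schema, but every content string and key is a placeholder inferred from the op names, not the instance's actual user data.

    import json

    # Structural sketch only: all contents, keys, and unit bodies below
    # are placeholders; only the schema layout is the point.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,placeholder"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target":
                     "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# placeholder"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder"},
        ]},
    }

    print(json.dumps(config, indent=2))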
Sep 13 00:06:31.147459 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:31.148632 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:06:31.149512 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:06:31.149693 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:31.150896 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:06:31.151814 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:06:31.152608 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:06:31.153462 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:06:31.154225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:06:31.154974 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:06:31.155804 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:06:31.156581 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:06:31.157683 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:06:31.158422 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:06:31.159282 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:06:31.159471 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:06:31.160501 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:31.161313 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:31.161953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:06:31.162083 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:06:31.162732 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:06:31.162945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:06:31.164345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:06:31.164572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:31.165287 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:06:31.165442 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:06:31.173499 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:06:31.176554 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:06:31.177650 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:06:31.177900 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:31.180577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:06:31.182309 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:06:31.197132 ignition[1413]: INFO : Ignition 2.19.0 Sep 13 00:06:31.198600 ignition[1413]: INFO : Stage: umount Sep 13 00:06:31.198600 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:31.198600 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:06:31.197662 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 13 00:06:31.202323 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:06:31.202323 ignition[1413]: INFO : PUT result: OK Sep 13 00:06:31.197798 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:06:31.205639 ignition[1413]: INFO : umount: umount passed Sep 13 00:06:31.207317 ignition[1413]: INFO : Ignition finished successfully Sep 13 00:06:31.208950 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:06:31.209162 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:06:31.211592 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:06:31.211653 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:06:31.212211 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:06:31.212271 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:06:31.212779 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:06:31.212829 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:06:31.216634 systemd[1]: Stopped target network.target - Network. Sep 13 00:06:31.217080 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:06:31.217211 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:06:31.217681 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:06:31.218092 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:06:31.218632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:31.218935 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:06:31.219492 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:06:31.219947 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:06:31.220000 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:06:31.221486 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:06:31.221525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:06:31.221879 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:06:31.221925 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:06:31.222584 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:06:31.222627 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:06:31.223457 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:06:31.224558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:06:31.225983 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:06:31.228224 systemd-networkd[1169]: eth0: DHCPv6 lease lost Sep 13 00:06:31.231633 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:06:31.231777 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:06:31.232941 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:06:31.233056 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:06:31.234980 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:06:31.235192 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:06:31.239730 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Sep 13 00:06:31.239797 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:31.240590 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:06:31.240658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:06:31.247328 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:06:31.248712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:06:31.248800 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:06:31.249446 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:06:31.249512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:31.250106 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:06:31.250174 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:31.251073 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:06:31.251133 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:31.253432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:31.266129 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:06:31.266294 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:06:31.267603 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:06:31.267747 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:31.269322 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:06:31.269391 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:31.270217 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:06:31.270266 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:31.270817 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:06:31.270862 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:06:31.272051 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:06:31.272102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:06:31.273376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:06:31.273426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:06:31.280385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:06:31.281026 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:06:31.281109 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:31.281788 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:06:31.281849 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:06:31.282511 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:06:31.282565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:31.283484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:31.283539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 00:06:31.292576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:06:31.292726 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:06:31.294009 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:06:31.299424 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:06:31.321874 systemd[1]: Switching root. Sep 13 00:06:31.353665 systemd-journald[178]: Journal stopped Sep 13 00:06:32.912027 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Sep 13 00:06:32.912105 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:06:32.912155 kernel: SELinux: policy capability open_perms=1 Sep 13 00:06:32.912182 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:06:32.912207 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:06:32.912226 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:06:32.912249 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:06:32.912275 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:06:32.912298 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:06:32.912317 kernel: audit: type=1403 audit(1757721991.846:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:06:32.912338 systemd[1]: Successfully loaded SELinux policy in 42.820ms. Sep 13 00:06:32.912370 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.327ms. Sep 13 00:06:32.912397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:06:32.912421 systemd[1]: Detected virtualization amazon. Sep 13 00:06:32.912442 systemd[1]: Detected architecture x86-64. Sep 13 00:06:32.912461 systemd[1]: Detected first boot. Sep 13 00:06:32.912478 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:06:32.912496 zram_generator::config[1474]: No configuration found. Sep 13 00:06:32.912516 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:06:32.912534 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:06:32.912563 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 13 00:06:32.912583 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:06:32.912602 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:06:32.912620 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:06:32.912638 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:06:32.912658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:06:32.912676 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:06:32.912695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:06:32.912716 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:06:32.912735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 13 00:06:32.912754 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:32.912773 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:06:32.912791 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:06:32.912810 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:06:32.912831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:06:32.912850 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:06:32.912869 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:32.912887 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:06:32.912909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:32.912933 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:06:32.912952 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:06:32.912971 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:06:32.912990 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:06:32.913008 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:06:32.913026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:06:32.913047 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:06:32.913069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:32.913087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:32.913105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:32.913123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:06:32.913153 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:06:32.913173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:06:32.913191 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:06:32.913209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:32.913228 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:06:32.913252 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:06:32.913272 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:06:32.913294 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:06:32.913315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:32.913335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:06:32.913356 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:06:32.913376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:32.913397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:32.913421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 13 00:06:32.913442 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:06:32.913463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:32.913484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:06:32.913505 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 13 00:06:32.913527 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 13 00:06:32.913547 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:06:32.913567 kernel: ACPI: bus type drm_connector registered Sep 13 00:06:32.913588 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:06:32.913611 kernel: fuse: init (API version 7.39) Sep 13 00:06:32.913629 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:06:32.913648 kernel: loop: module loaded Sep 13 00:06:32.913667 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:06:32.913690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:06:32.913712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:32.913734 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:06:32.913755 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:06:32.913816 systemd-journald[1580]: Collecting audit messages is disabled. Sep 13 00:06:32.913861 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:06:32.913884 systemd-journald[1580]: Journal started Sep 13 00:06:32.913925 systemd-journald[1580]: Runtime Journal (/run/log/journal/ec233d5c32255ac055f618400896f7bc) is 4.7M, max 38.2M, 33.4M free. Sep 13 00:06:32.918257 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:06:32.920437 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:06:32.921646 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:06:32.922319 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:06:32.923465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:06:32.924654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:32.925679 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:06:32.925920 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:06:32.926917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:32.927359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:32.928602 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:32.928848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:32.930523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:32.930777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:32.932312 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 13 00:06:32.932561 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:06:32.934059 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:32.934328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:32.935727 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:32.937264 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:06:32.941739 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:06:32.956260 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:06:32.962452 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:06:32.971252 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:06:32.972027 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:06:32.983212 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:06:32.995326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:06:32.997358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:33.008336 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:06:33.010364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:33.016621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:33.024247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:06:33.033433 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:06:33.038969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:06:33.040815 systemd-journald[1580]: Time spent on flushing to /var/log/journal/ec233d5c32255ac055f618400896f7bc is 61.241ms for 974 entries. Sep 13 00:06:33.040815 systemd-journald[1580]: System Journal (/var/log/journal/ec233d5c32255ac055f618400896f7bc) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:06:33.109396 systemd-journald[1580]: Received client request to flush runtime journal. Sep 13 00:06:33.045600 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:06:33.052731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:33.058127 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:06:33.073418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:06:33.113500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:33.114729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:06:33.128001 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:06:33.139329 systemd-tmpfiles[1625]: ACLs are not supported, ignoring. Sep 13 00:06:33.139358 systemd-tmpfiles[1625]: ACLs are not supported, ignoring. 
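The journald flush report above doubles as a small performance datapoint: 61.241 ms to move 974 entries from the runtime journal to /var/log/journal works out to roughly 63 microseconds per entry.

    # Per-entry cost implied by the journald flush message above.
    ms, entries = 61.241, 974
    print(f"{ms / entries * 1000:.1f} us per entry")   # ~62.9 us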
Sep 13 00:06:33.147710 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:06:33.156444 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:06:33.209408 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:06:33.218441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:06:33.240853 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Sep 13 00:06:33.240883 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Sep 13 00:06:33.248077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:33.811568 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:06:33.816323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:33.843756 systemd-udevd[1653]: Using default interface naming scheme 'v255'. Sep 13 00:06:33.883460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:33.894458 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:06:33.908323 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:06:33.940447 (udev-worker)[1664]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:06:33.942127 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 13 00:06:33.982489 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:06:34.027541 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:06:34.046211 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:06:34.049819 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:06:34.049876 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:06:34.052309 systemd-networkd[1657]: lo: Link UP Sep 13 00:06:34.052729 systemd-networkd[1657]: lo: Gained carrier Sep 13 00:06:34.054787 systemd-networkd[1657]: Enumeration completed Sep 13 00:06:34.057733 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:06:34.056262 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:06:34.061345 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:34.061352 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:06:34.064109 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:06:34.068866 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:06:34.068904 systemd-networkd[1657]: eth0: Link UP Sep 13 00:06:34.069069 systemd-networkd[1657]: eth0: Gained carrier Sep 13 00:06:34.069080 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:06:34.079540 systemd-networkd[1657]: eth0: DHCPv4 address 172.31.25.42/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:06:34.095190 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:06:34.095505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:34.111154 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:06:34.115242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:34.115495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:34.118169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1665) Sep 13 00:06:34.122470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:34.276012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 13 00:06:34.277178 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:06:34.287475 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:06:34.288285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:34.314613 lvm[1778]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:34.343205 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:06:34.344359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:34.350462 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:06:34.356884 lvm[1783]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:34.383366 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:06:34.384482 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:06:34.384923 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:06:34.384943 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:06:34.385319 systemd[1]: Reached target machines.target - Containers. Sep 13 00:06:34.386955 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:06:34.392464 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:06:34.395884 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:06:34.396872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:34.399422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:06:34.406384 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:06:34.418347 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:06:34.421716 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:06:34.453777 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
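The DHCPv4 lease above (172.31.25.42/20 via gateway 172.31.16.1) follows the usual VPC layout: a /20 spans 172.31.16.0 through 172.31.31.255, and the gateway sits on the first host address. The subnet arithmetic checks out with the standard-library ipaddress module:

    # Verify the subnet math for the DHCPv4 lease logged above.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.25.42/20")
    net = iface.network
    print(net)                    # 172.31.16.0/20
    print(net.broadcast_address)  # 172.31.31.255
    print(next(net.hosts()))      # 172.31.16.1 -- matches the gateway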
Sep 13 00:06:34.455649 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:06:34.461874 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:06:34.469168 kernel: loop0: detected capacity change from 0 to 142488 Sep 13 00:06:34.551353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:06:34.571340 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 00:06:34.673287 kernel: loop2: detected capacity change from 0 to 140768 Sep 13 00:06:34.772171 kernel: loop3: detected capacity change from 0 to 61336 Sep 13 00:06:34.832168 kernel: loop4: detected capacity change from 0 to 142488 Sep 13 00:06:34.858160 kernel: loop5: detected capacity change from 0 to 221472 Sep 13 00:06:34.886198 kernel: loop6: detected capacity change from 0 to 140768 Sep 13 00:06:34.912161 kernel: loop7: detected capacity change from 0 to 61336 Sep 13 00:06:34.945101 (sd-merge)[1804]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 13 00:06:34.945989 (sd-merge)[1804]: Merged extensions into '/usr'. Sep 13 00:06:34.956818 systemd[1]: Reloading requested from client PID 1791 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:06:34.956837 systemd[1]: Reloading... Sep 13 00:06:35.060300 zram_generator::config[1832]: No configuration found. Sep 13 00:06:35.229476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:35.341505 systemd[1]: Reloading finished in 382 ms. Sep 13 00:06:35.344252 ldconfig[1787]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:06:35.357634 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:06:35.358739 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:06:35.378350 systemd[1]: Starting ensure-sysext.service... Sep 13 00:06:35.382353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:06:35.387769 systemd[1]: Reloading requested from client PID 1891 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:06:35.387938 systemd[1]: Reloading... Sep 13 00:06:35.418304 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:06:35.418840 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:06:35.419849 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:06:35.420257 systemd-tmpfiles[1892]: ACLs are not supported, ignoring. Sep 13 00:06:35.420348 systemd-tmpfiles[1892]: ACLs are not supported, ignoring. Sep 13 00:06:35.424812 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:35.424974 systemd-tmpfiles[1892]: Skipping /boot Sep 13 00:06:35.446629 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:35.447334 systemd-tmpfiles[1892]: Skipping /boot Sep 13 00:06:35.475158 zram_generator::config[1917]: No configuration found. 
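The loop0 through loop7 capacity changes and the (sd-merge) lines above are systemd-sysext assembling /usr: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) is attached as a loop device and stacked over the base /usr as read-only overlayfs layers. A rough sketch of the equivalent mount follows; the per-extension layer paths are illustrative assumptions (sysext actually stages images under /run and validates extension-release metadata before merging).

    # Approximation of the overlay mount behind "Merged extensions into '/usr'".
    # Layer paths are assumptions; in overlayfs the leftmost lowerdir wins,
    # and an overlay with no upperdir is inherently read-only.
    import subprocess

    layers = [
        "/run/sysext/oem-ami/usr",
        "/run/sysext/kubernetes/usr",
        "/run/sysext/docker-flatcar/usr",
        "/run/sysext/containerd-flatcar/usr",
        "/usr",                          # base layer, lowest priority
    ]
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "lowerdir=" + ":".join(layers),
         "/usr"],
        check=True,
    )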
Sep 13 00:06:35.587326 systemd-networkd[1657]: eth0: Gained IPv6LL Sep 13 00:06:35.632879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:35.706243 systemd[1]: Reloading finished in 317 ms. Sep 13 00:06:35.723915 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:06:35.729718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:35.739395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:35.743367 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:06:35.753439 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:06:35.760452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:06:35.772444 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:06:35.781999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:35.782432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:35.786894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:35.796542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:35.804013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:35.805144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:35.805765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:35.811037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:35.811838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:35.816111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:35.820593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:35.835265 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:35.837382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:35.849819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:35.850744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:35.857727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:35.866531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:35.873050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:35.892166 augenrules[2020]: No rules Sep 13 00:06:35.898025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:35.900901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 13 00:06:35.902673 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:06:35.906703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:35.911985 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:35.916915 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:06:35.919874 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:06:35.922035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:35.922297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:35.925100 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:35.925334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:35.926839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:35.927193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:35.943212 systemd[1]: Finished ensure-sysext.service. Sep 13 00:06:35.947224 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:35.947486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:35.958913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:35.959008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:35.965393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:06:35.983232 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:06:35.987396 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:35.990807 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:06:35.997572 systemd-resolved[1989]: Positive Trust Anchors: Sep 13 00:06:35.997596 systemd-resolved[1989]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:06:35.997644 systemd-resolved[1989]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:06:36.002762 systemd-resolved[1989]: Defaulting to hostname 'linux'. Sep 13 00:06:36.004774 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:06:36.005331 systemd[1]: Reached target network.target - Network. Sep 13 00:06:36.005710 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:06:36.006052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
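The "Positive Trust Anchors" line above is systemd-resolved's built-in DNSSEC root anchor, a DS record: key tag 20326 (the root KSK introduced in 2017), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest of the root-zone DNSKEY. Decoding the fields as logged (field meanings per RFC 4034; the split itself is just illustrative):

    # Decode the DS record fields from the trust-anchor line above.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    print("owner:      ", owner)             # "." = the DNS root zone
    print("key tag:    ", int(key_tag))      # 20326 -> the 2017 root KSK
    print("algorithm:  ", int(alg))          # 8 = RSASHA256
    print("digest type:", int(digest_type))  # 2 = SHA-256
    print("digest:     ", digest)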
Sep 13 00:06:36.006385 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:06:36.006806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:06:36.007192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:06:36.007693 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:06:36.008068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:06:36.008378 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:06:36.008688 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:06:36.008719 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:06:36.008989 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:06:36.010714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:06:36.012807 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:06:36.014761 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:06:36.020374 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:06:36.020951 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:06:36.021414 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:06:36.022004 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:06:36.022046 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:06:36.022068 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:06:36.024191 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:06:36.026391 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:06:36.029478 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:06:36.033292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:06:36.039452 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:06:36.040203 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:06:36.059609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:36.066022 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:06:36.073329 systemd[1]: Started ntpd.service - Network Time Service. Sep 13 00:06:36.081255 jq[2049]: false Sep 13 00:06:36.085858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 13 00:06:36.090218 extend-filesystems[2051]: Found loop4 Sep 13 00:06:36.090218 extend-filesystems[2051]: Found loop5 Sep 13 00:06:36.090218 extend-filesystems[2051]: Found loop6 Sep 13 00:06:36.090218 extend-filesystems[2051]: Found loop7 Sep 13 00:06:36.090218 extend-filesystems[2051]: Found nvme0n1 Sep 13 00:06:36.090218 extend-filesystems[2051]: Found nvme0n1p1 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p2 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p3 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found usr Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p4 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p6 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p7 Sep 13 00:06:36.113095 extend-filesystems[2051]: Found nvme0n1p9 Sep 13 00:06:36.113095 extend-filesystems[2051]: Checking size of /dev/nvme0n1p9 Sep 13 00:06:36.113095 extend-filesystems[2051]: Resized partition /dev/nvme0n1p9 Sep 13 00:06:36.100257 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:06:36.114354 dbus-daemon[2048]: [system] SELinux support is enabled Sep 13 00:06:36.120922 extend-filesystems[2069]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:06:36.109344 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 13 00:06:36.123240 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:06:36.120290 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:06:36.122429 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:06:36.132109 dbus-daemon[2048]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1657 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:06:36.146235 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:06:36.146924 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:06:36.149264 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:06:36.162739 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:06:36.165935 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:06:36.181728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:06:36.181963 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:06:36.190161 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:06:36.210692 jq[2079]: true Sep 13 00:06:36.216544 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:06:36.219346 extend-filesystems[2069]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:06:36.219346 extend-filesystems[2069]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:06:36.219346 extend-filesystems[2069]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:06:36.216787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:06:36.231296 extend-filesystems[2051]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:06:36.218573 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
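The resize above grows ROOT from 553,472 to 1,489,915 blocks at 4 KiB each, i.e. from about 2.1 GiB to about 5.7 GiB, the usual first-boot step where the filesystem is expanded to fill /dev/nvme0n1p9. The conversion:

    # Size change implied by the resize2fs output above (4 KiB blocks).
    BLOCK = 4096
    old, new = 553_472, 1_489_915
    print(f"before: {old * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new * BLOCK / 2**30:.2f} GiB")  # ~5.68 GiB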
Sep 13 00:06:36.218802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:06:36.250020 ntpd[2058]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:06:36.250349 ntpd[2058]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:06:36.251567 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: ---------------------------------------------------- Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: corporation. Support and training for ntp-4 are Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: available at https://www.nwtime.org/support Sep 13 00:06:36.252436 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: ---------------------------------------------------- Sep 13 00:06:36.250358 ntpd[2058]: ---------------------------------------------------- Sep 13 00:06:36.250365 ntpd[2058]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:06:36.250372 ntpd[2058]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:06:36.250378 ntpd[2058]: corporation. Support and training for ntp-4 are Sep 13 00:06:36.250384 ntpd[2058]: available at https://www.nwtime.org/support Sep 13 00:06:36.250391 ntpd[2058]: ---------------------------------------------------- Sep 13 00:06:36.253009 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
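ntpd starts here and, further down, binds UDP port 123 on each interface. For illustration only, a self-contained SNTP query in Go showing the wire format ntpd speaks (server name is a placeholder, not one this host is configured with):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"net"
	"time"
)

// Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
const ntpEpochOffset = 2208988800

func main() {
	conn, err := net.Dial("udp", "pool.ntp.org:123") // placeholder server
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(5 * time.Second))

	// 48-byte SNTP request: LI=0, VN=4, Mode=3 (client).
	req := make([]byte, 48)
	req[0] = 0x23
	if _, err := conn.Write(req); err != nil {
		log.Fatal(err)
	}

	resp := make([]byte, 48)
	if _, err := conn.Read(resp); err != nil {
		log.Fatal(err)
	}
	// Transmit timestamp: whole seconds since 1900, big-endian, at offset 40
	// (fractional part at offset 44 is ignored in this sketch).
	secs := binary.BigEndian.Uint32(resp[40:44])
	fmt.Println("server time:", time.Unix(int64(secs)-ntpEpochOffset, 0).UTC())
}
```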
Sep 13 00:06:36.273705 update_engine[2078]: I20250913 00:06:36.259029 2078 main.cc:92] Flatcar Update Engine starting Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: proto: precision = 0.053 usec (-24) Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: basedate set to 2025-08-31 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: gps base set to 2025-08-31 (week 2382) Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen normally on 3 eth0 172.31.25.42:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen normally on 4 lo [::1]:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listen normally on 5 eth0 [fe80::477:4eff:fede:aa7f%2]:123 Sep 13 00:06:36.282375 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: Listening on routing socket on fd #22 for interface updates Sep 13 00:06:36.255407 ntpd[2058]: proto: precision = 0.053 usec (-24) Sep 13 00:06:36.254293 systemd-logind[2075]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:06:36.269228 ntpd[2058]: basedate set to 2025-08-31 Sep 13 00:06:36.254311 systemd-logind[2075]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:06:36.269248 ntpd[2058]: gps base set to 2025-08-31 (week 2382) Sep 13 00:06:36.254329 systemd-logind[2075]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:06:36.277882 ntpd[2058]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:06:36.254585 (ntainerd)[2108]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:06:36.277926 ntpd[2058]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:06:36.256314 systemd-logind[2075]: New seat seat0. Sep 13 00:06:36.278077 ntpd[2058]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:06:36.261281 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:06:36.278104 ntpd[2058]: Listen normally on 3 eth0 172.31.25.42:123 Sep 13 00:06:36.275439 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:06:36.278155 ntpd[2058]: Listen normally on 4 lo [::1]:123 Sep 13 00:06:36.281777 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 13 00:06:36.278188 ntpd[2058]: Listen normally on 5 eth0 [fe80::477:4eff:fede:aa7f%2]:123 Sep 13 00:06:36.292392 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 13 00:06:36.278216 ntpd[2058]: Listening on routing socket on fd #22 for interface updates Sep 13 00:06:36.292665 dbus-daemon[2048]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:06:36.294848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 13 00:06:36.296511 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:06:36.309347 jq[2102]: true Sep 13 00:06:36.309433 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:06:36.309433 ntpd[2058]: 13 Sep 00:06:36 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:06:36.294876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:06:36.296538 ntpd[2058]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:06:36.295305 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:06:36.295320 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:06:36.319489 tar[2096]: linux-amd64/helm Sep 13 00:06:36.325058 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1665) Sep 13 00:06:36.325088 update_engine[2078]: I20250913 00:06:36.319314 2078 update_check_scheduler.cc:74] Next update check in 4m11s Sep 13 00:06:36.324420 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 00:06:36.332922 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:06:36.336468 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:06:36.347226 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:06:36.358738 coreos-metadata[2047]: Sep 13 00:06:36.357 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:06:36.363122 coreos-metadata[2047]: Sep 13 00:06:36.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 13 00:06:36.366380 coreos-metadata[2047]: Sep 13 00:06:36.366 INFO Fetch successful Sep 13 00:06:36.366439 coreos-metadata[2047]: Sep 13 00:06:36.366 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 13 00:06:36.367356 coreos-metadata[2047]: Sep 13 00:06:36.367 INFO Fetch successful Sep 13 00:06:36.367628 coreos-metadata[2047]: Sep 13 00:06:36.367 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 13 00:06:36.369478 coreos-metadata[2047]: Sep 13 00:06:36.369 INFO Fetch successful Sep 13 00:06:36.369478 coreos-metadata[2047]: Sep 13 00:06:36.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 13 00:06:36.370501 coreos-metadata[2047]: Sep 13 00:06:36.369 INFO Fetch successful Sep 13 00:06:36.370501 coreos-metadata[2047]: Sep 13 00:06:36.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 13 00:06:36.371823 coreos-metadata[2047]: Sep 13 00:06:36.371 INFO Fetch failed with 404: resource not found Sep 13 00:06:36.371823 coreos-metadata[2047]: Sep 13 00:06:36.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 13 00:06:36.377154 coreos-metadata[2047]: Sep 13 00:06:36.376 INFO Fetch successful Sep 13 00:06:36.377154 coreos-metadata[2047]: Sep 13 00:06:36.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 13 00:06:36.377154 coreos-metadata[2047]: Sep 13 00:06:36.376 INFO Fetch successful Sep 13 00:06:36.377154 coreos-metadata[2047]: Sep 13 00:06:36.376 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 13 00:06:36.379635 coreos-metadata[2047]: Sep 13 00:06:36.379 INFO Fetch successful Sep 13 00:06:36.379635 coreos-metadata[2047]: Sep 13 00:06:36.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 13 00:06:36.381838 coreos-metadata[2047]: Sep 13 00:06:36.380 INFO Fetch successful Sep 13 00:06:36.381838 coreos-metadata[2047]: Sep 13 00:06:36.380 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 13 00:06:36.381838 coreos-metadata[2047]: Sep 13 00:06:36.381 INFO Fetch successful Sep 13 00:06:36.430640 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:06:36.431492 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:06:36.436400 bash[2168]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:36.437616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:06:36.455485 systemd[1]: Starting sshkeys.service... Sep 13 00:06:36.486421 amazon-ssm-agent[2130]: Initializing new seelog logger Sep 13 00:06:36.486710 amazon-ssm-agent[2130]: New Seelog Logger Creation Complete Sep 13 00:06:36.486710 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.486710 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.489346 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 processing appconfig overrides Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 processing appconfig overrides Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 processing appconfig overrides Sep 13 00:06:36.497161 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO Proxy environment variables: Sep 13 00:06:36.516507 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:06:36.528497 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:06:36.535620 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:06:36.535620 amazon-ssm-agent[2130]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 13 00:06:36.535711 amazon-ssm-agent[2130]: 2025/09/13 00:06:36 processing appconfig overrides Sep 13 00:06:36.598211 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO no_proxy: Sep 13 00:06:36.699424 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO https_proxy: Sep 13 00:06:36.716476 coreos-metadata[2196]: Sep 13 00:06:36.716 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:06:36.717202 coreos-metadata[2196]: Sep 13 00:06:36.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 13 00:06:36.719798 coreos-metadata[2196]: Sep 13 00:06:36.719 INFO Fetch successful Sep 13 00:06:36.719896 coreos-metadata[2196]: Sep 13 00:06:36.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:06:36.723220 coreos-metadata[2196]: Sep 13 00:06:36.723 INFO Fetch successful Sep 13 00:06:36.725582 unknown[2196]: wrote ssh authorized keys file for user: core Sep 13 00:06:36.797449 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO http_proxy: Sep 13 00:06:36.806469 dbus-daemon[2048]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:06:36.806617 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 00:06:36.822231 update-ssh-keys[2257]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:36.816588 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:06:36.825241 systemd[1]: Finished sshkeys.service. Sep 13 00:06:36.826830 dbus-daemon[2048]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2136 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:06:36.836504 systemd[1]: Starting polkit.service - Authorization Manager... Sep 13 00:06:36.895104 locksmithd[2138]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:06:36.896069 polkitd[2279]: Started polkitd version 121 Sep 13 00:06:36.899965 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO Checking if agent identity type OnPrem can be assumed Sep 13 00:06:36.921341 polkitd[2279]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:06:36.921421 polkitd[2279]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:06:36.929263 polkitd[2279]: Finished loading, compiling and executing 2 rules Sep 13 00:06:36.929715 dbus-daemon[2048]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:06:36.929889 systemd[1]: Started polkit.service - Authorization Manager. Sep 13 00:06:36.938311 polkitd[2279]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:06:36.993646 systemd-hostnamed[2136]: Hostname set to (transient) Sep 13 00:06:36.993692 systemd-resolved[1989]: System hostname changed to 'ip-172-31-25-42'. 
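The coreos-metadata fetches above follow the IMDSv2 pattern: first "Putting http://169.254.169.254/latest/api/token", then presenting that token on each metadata GET. A minimal sketch of the same handshake (URLs and paths taken from the log; header names are the standard EC2 IMDSv2 ones):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	c := &http.Client{}

	// Step 1: PUT a session token, as in the "Putting .../latest/api/token" line.
	req, _ := http.NewRequest(http.MethodPut,
		"http://169.254.169.254/latest/api/token", strings.NewReader(""))
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
	resp, err := c.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: present the token on a metadata GET, one of the paths fetched above.
	get, _ := http.NewRequest(http.MethodGet,
		"http://169.254.169.254/2021-01-03/meta-data/instance-id", nil)
	get.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = c.Do(get)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	id, _ := io.ReadAll(resp.Body)
	fmt.Println("instance-id:", string(id))
}
```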
Sep 13 00:06:37.000252 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO Checking if agent identity type EC2 can be assumed Sep 13 00:06:37.096762 containerd[2108]: time="2025-09-13T00:06:37.096640039Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:06:37.097240 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO Agent will take identity from EC2 Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] Starting Core Agent Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [Registrar] Starting registrar module Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:37 INFO [EC2Identity] EC2 registration was successful. Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:37 INFO [CredentialRefresher] credentialRefresher has started Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:37 INFO [CredentialRefresher] Starting credentials refresher loop Sep 13 00:06:37.156330 amazon-ssm-agent[2130]: 2025-09-13 00:06:37 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 13 00:06:37.168883 containerd[2108]: time="2025-09-13T00:06:37.166953168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173090861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173182700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173210336Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173398268Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173421040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173493711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173511407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173803540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173825091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173845836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:37.174659 containerd[2108]: time="2025-09-13T00:06:37.173862742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.173965639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.174235582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.174449403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.174470481Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.174563111Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:06:37.175168 containerd[2108]: time="2025-09-13T00:06:37.174617488Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.184759103Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.184849432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.184878815Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.184942912Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.184966428Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:06:37.185401 containerd[2108]: time="2025-09-13T00:06:37.185170256Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186081242Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186243564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186265600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186292506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186314838Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186335207Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186356661Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186379044Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186401652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186421919Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186442000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186460916Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186489193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.187997 containerd[2108]: time="2025-09-13T00:06:37.186510857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186530004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186551303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186571751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186592047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186610558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186630993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186656193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186678053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186697200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186718604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186738527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186762713Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186790899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186810152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.188579 containerd[2108]: time="2025-09-13T00:06:37.186829049Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186878941Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186902848Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186919304Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186937671Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186953274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186971036Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.186991518Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:06:37.189116 containerd[2108]: time="2025-09-13T00:06:37.187007288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:37.192218 containerd[2108]: time="2025-09-13T00:06:37.190390454Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:06:37.192218 containerd[2108]: time="2025-09-13T00:06:37.190487752Z" level=info msg="Connect containerd service" Sep 13 00:06:37.192218 containerd[2108]: time="2025-09-13T00:06:37.190549001Z" level=info msg="using legacy CRI server" Sep 13 00:06:37.192218 containerd[2108]: time="2025-09-13T00:06:37.190561371Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:06:37.192218 containerd[2108]: time="2025-09-13T00:06:37.190708766Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.195353505Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.195846900Z" level=info msg="Start subscribing containerd event" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.195924863Z" level=info msg="Start recovering state" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.196012100Z" level=info msg="Start event monitor" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.196032585Z" level=info msg="Start snapshots syncer" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.196045811Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:06:37.196132 containerd[2108]: time="2025-09-13T00:06:37.196057949Z" level=info msg="Start streaming server" Sep 13 00:06:37.197369 amazon-ssm-agent[2130]: 2025-09-13 00:06:37 INFO [CredentialRefresher] Next credential rotation will be in 30.54166006995 minutes Sep 13 00:06:37.197442 containerd[2108]: time="2025-09-13T00:06:37.197367005Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:06:37.197442 containerd[2108]: time="2025-09-13T00:06:37.197428836Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:06:37.198350 containerd[2108]: time="2025-09-13T00:06:37.197495081Z" level=info msg="containerd successfully booted in 0.102686s" Sep 13 00:06:37.197637 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:06:37.370162 sshd_keygen[2106]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:06:37.420439 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:06:37.434416 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:06:37.450890 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:06:37.452607 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:06:37.467235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:06:37.488764 tar[2096]: linux-amd64/LICENSE Sep 13 00:06:37.488764 tar[2096]: linux-amd64/README.md Sep 13 00:06:37.499013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:06:37.505428 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:06:37.515950 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:06:37.523551 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:06:37.525255 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:06:38.168441 amazon-ssm-agent[2130]: 2025-09-13 00:06:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 13 00:06:38.268790 amazon-ssm-agent[2130]: 2025-09-13 00:06:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2318) started Sep 13 00:06:38.369624 amazon-ssm-agent[2130]: 2025-09-13 00:06:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 13 00:06:38.620276 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:06:38.625587 systemd[1]: Started sshd@0-172.31.25.42:22-139.178.89.65:32980.service - OpenSSH per-connection server daemon (139.178.89.65:32980). 
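containerd reports "successfully booted" and serving on /run/containerd/containerd.sock. A minimal sketch of connecting to that socket with the Go client and reading back the version/revision shown in the startup line (assumes the github.com/containerd/containerd module):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the log reports containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All containerd API calls are namespaced; "default" is the stock namespace.
	ctx := namespaces.WithNamespace(context.Background(), "default")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Should match the "version=v1.7.21 revision=174e0d..." startup line above.
	fmt.Printf("containerd %s (%s)\n", v.Version, v.Revision)
}
```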
Sep 13 00:06:38.797653 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 32980 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:38.800573 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:38.815544 systemd-logind[2075]: New session 1 of user core. Sep 13 00:06:38.816774 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:06:38.822478 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:06:38.842956 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:06:38.854535 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:06:38.860998 (systemd)[2334]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:38.982925 systemd[2334]: Queued start job for default target default.target. Sep 13 00:06:38.983345 systemd[2334]: Created slice app.slice - User Application Slice. Sep 13 00:06:38.983364 systemd[2334]: Reached target paths.target - Paths. Sep 13 00:06:38.983376 systemd[2334]: Reached target timers.target - Timers. Sep 13 00:06:38.988271 systemd[2334]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:06:38.996674 systemd[2334]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:06:38.996742 systemd[2334]: Reached target sockets.target - Sockets. Sep 13 00:06:38.996756 systemd[2334]: Reached target basic.target - Basic System. Sep 13 00:06:38.996798 systemd[2334]: Reached target default.target - Main User Target. Sep 13 00:06:38.996826 systemd[2334]: Startup finished in 128ms. Sep 13 00:06:38.997045 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:06:39.006490 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:06:39.036303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:39.037614 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:06:39.040173 systemd[1]: Startup finished in 7.731s (kernel) + 7.235s (userspace) = 14.967s. Sep 13 00:06:39.049830 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:39.151534 systemd[1]: Started sshd@1-172.31.25.42:22-139.178.89.65:32990.service - OpenSSH per-connection server daemon (139.178.89.65:32990). Sep 13 00:06:39.309035 sshd[2359]: Accepted publickey for core from 139.178.89.65 port 32990 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:39.310545 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:39.321096 systemd-logind[2075]: New session 2 of user core. Sep 13 00:06:39.340597 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:06:39.464523 sshd[2359]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:39.467525 systemd[1]: sshd@1-172.31.25.42:22-139.178.89.65:32990.service: Deactivated successfully. Sep 13 00:06:39.471710 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:06:39.472659 systemd-logind[2075]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:06:39.473943 systemd-logind[2075]: Removed session 2. Sep 13 00:06:39.494492 systemd[1]: Started sshd@2-172.31.25.42:22-139.178.89.65:33006.service - OpenSSH per-connection server daemon (139.178.89.65:33006). 
Sep 13 00:06:39.652170 sshd[2371]: Accepted publickey for core from 139.178.89.65 port 33006 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:39.653496 sshd[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:39.657805 systemd-logind[2075]: New session 3 of user core. Sep 13 00:06:39.663460 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:06:39.784396 sshd[2371]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:39.788383 systemd[1]: sshd@2-172.31.25.42:22-139.178.89.65:33006.service: Deactivated successfully. Sep 13 00:06:39.791541 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:06:39.792311 systemd-logind[2075]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:06:39.793373 systemd-logind[2075]: Removed session 3. Sep 13 00:06:39.810438 systemd[1]: Started sshd@3-172.31.25.42:22-139.178.89.65:33020.service - OpenSSH per-connection server daemon (139.178.89.65:33020). Sep 13 00:06:39.961536 sshd[2379]: Accepted publickey for core from 139.178.89.65 port 33020 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:39.962928 sshd[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:39.968723 systemd-logind[2075]: New session 4 of user core. Sep 13 00:06:39.975496 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:06:40.098934 sshd[2379]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.101891 systemd[1]: sshd@3-172.31.25.42:22-139.178.89.65:33020.service: Deactivated successfully. Sep 13 00:06:40.108050 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:06:40.109363 systemd-logind[2075]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:06:40.110555 systemd-logind[2075]: Removed session 4. Sep 13 00:06:40.126482 systemd[1]: Started sshd@4-172.31.25.42:22-139.178.89.65:33034.service - OpenSSH per-connection server daemon (139.178.89.65:33034). Sep 13 00:06:40.214501 kubelet[2353]: E0913 00:06:40.214343 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:40.216935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:40.217132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:40.277620 sshd[2387]: Accepted publickey for core from 139.178.89.65 port 33034 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:40.279123 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:40.284255 systemd-logind[2075]: New session 5 of user core. Sep 13 00:06:40.290461 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:06:40.400658 sudo[2394]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:06:40.401047 sudo[2394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:40.415858 sudo[2394]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:40.438459 sshd[2387]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.442937 systemd[1]: sshd@4-172.31.25.42:22-139.178.89.65:33034.service: Deactivated successfully. 
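The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, and that file is normally generated by `kubeadm init`/`kubeadm join` rather than written by hand. Purely to illustrate the file's shape, a hypothetical minimal KubeletConfiguration written from Go (the cgroupDriver value is an assumption; real clusters should let kubeadm produce this file):

```go
package main

import (
	"log"
	"os"
)

// Hypothetical minimal KubeletConfiguration, the file whose absence causes
// the "failed to load Kubelet config file" error logged above.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml",
		[]byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```

With the file in place, the unit's scheduled restart (seen later in the log) lets the kubelet proceed past config loading.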
Sep 13 00:06:40.446780 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:06:40.447529 systemd-logind[2075]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:06:40.449003 systemd-logind[2075]: Removed session 5. Sep 13 00:06:40.465505 systemd[1]: Started sshd@5-172.31.25.42:22-139.178.89.65:33036.service - OpenSSH per-connection server daemon (139.178.89.65:33036). Sep 13 00:06:40.615824 sshd[2399]: Accepted publickey for core from 139.178.89.65 port 33036 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:40.617346 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:40.621885 systemd-logind[2075]: New session 6 of user core. Sep 13 00:06:40.629532 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:06:40.726130 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:06:40.726548 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:40.730604 sudo[2404]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:40.736188 sudo[2403]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:06:40.736571 sudo[2403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:40.750484 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:40.753629 auditctl[2407]: No rules Sep 13 00:06:40.754025 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:06:40.754307 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:40.761623 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:40.787377 augenrules[2426]: No rules Sep 13 00:06:40.789871 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:40.792895 sudo[2403]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:40.815342 sshd[2399]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.818182 systemd[1]: sshd@5-172.31.25.42:22-139.178.89.65:33036.service: Deactivated successfully. Sep 13 00:06:40.820420 systemd-logind[2075]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:06:40.822412 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:06:40.823397 systemd-logind[2075]: Removed session 6. Sep 13 00:06:40.842465 systemd[1]: Started sshd@6-172.31.25.42:22-139.178.89.65:33052.service - OpenSSH per-connection server daemon (139.178.89.65:33052). Sep 13 00:06:40.995108 sshd[2435]: Accepted publickey for core from 139.178.89.65 port 33052 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:40.995977 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:41.001306 systemd-logind[2075]: New session 7 of user core. Sep 13 00:06:41.010782 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:06:41.106829 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:06:41.107218 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:41.584484 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 13 00:06:41.585785 (dockerd)[2454]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:06:42.115705 dockerd[2454]: time="2025-09-13T00:06:42.115628491Z" level=info msg="Starting up" Sep 13 00:06:42.466314 dockerd[2454]: time="2025-09-13T00:06:42.466201618Z" level=info msg="Loading containers: start." Sep 13 00:06:42.590184 kernel: Initializing XFRM netlink socket Sep 13 00:06:42.619276 (udev-worker)[2480]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:06:42.680375 systemd-networkd[1657]: docker0: Link UP Sep 13 00:06:42.692508 dockerd[2454]: time="2025-09-13T00:06:42.692464601Z" level=info msg="Loading containers: done." Sep 13 00:06:42.723854 dockerd[2454]: time="2025-09-13T00:06:42.723737508Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:06:42.723854 dockerd[2454]: time="2025-09-13T00:06:42.723848755Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:06:42.724028 dockerd[2454]: time="2025-09-13T00:06:42.723976145Z" level=info msg="Daemon has completed initialization" Sep 13 00:06:42.757772 dockerd[2454]: time="2025-09-13T00:06:42.757715336Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:06:42.758299 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:06:44.545661 systemd-resolved[1989]: Clock change detected. Flushing caches. Sep 13 00:06:45.277215 containerd[2108]: time="2025-09-13T00:06:45.277162945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:06:45.827669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986101830.mount: Deactivated successfully. 
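Once dockerd logs "API listen on /run/docker.sock", the engine is reachable over that socket. A minimal sketch of a client ping against it using the official Go SDK (assumes the github.com/docker/docker module; FromEnv falls back to the default unix socket when DOCKER_HOST is unset):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Talks to the /run/docker.sock endpoint the daemon just announced.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```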
Sep 13 00:06:47.485413 containerd[2108]: time="2025-09-13T00:06:47.485359910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:47.486513 containerd[2108]: time="2025-09-13T00:06:47.486357358Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:06:47.487736 containerd[2108]: time="2025-09-13T00:06:47.487489400Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:47.490076 containerd[2108]: time="2025-09-13T00:06:47.490033524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:47.491247 containerd[2108]: time="2025-09-13T00:06:47.491219335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.21401491s" Sep 13 00:06:47.491462 containerd[2108]: time="2025-09-13T00:06:47.491333027Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:06:47.491888 containerd[2108]: time="2025-09-13T00:06:47.491866638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:06:49.277665 containerd[2108]: time="2025-09-13T00:06:49.277469621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.279325 containerd[2108]: time="2025-09-13T00:06:49.279260499Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:06:49.281476 containerd[2108]: time="2025-09-13T00:06:49.281415139Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.285397 containerd[2108]: time="2025-09-13T00:06:49.285339417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.286503 containerd[2108]: time="2025-09-13T00:06:49.286473877Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.794575843s" Sep 13 00:06:49.286755 containerd[2108]: time="2025-09-13T00:06:49.286604189Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 
00:06:49.287219 containerd[2108]: time="2025-09-13T00:06:49.287191244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:06:50.876232 containerd[2108]: time="2025-09-13T00:06:50.876183086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:50.877452 containerd[2108]: time="2025-09-13T00:06:50.877279314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:06:50.878869 containerd[2108]: time="2025-09-13T00:06:50.878441768Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:50.881169 containerd[2108]: time="2025-09-13T00:06:50.881136842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:50.882415 containerd[2108]: time="2025-09-13T00:06:50.882385912Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.595162289s" Sep 13 00:06:50.882551 containerd[2108]: time="2025-09-13T00:06:50.882536688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:06:50.883421 containerd[2108]: time="2025-09-13T00:06:50.883391289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:06:51.588504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:51.595252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:51.902333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:51.915510 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:51.993430 kubelet[2675]: E0913 00:06:51.993387 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:51.998891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:51.999135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:52.120508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379616792.mount: Deactivated successfully. 
Sep 13 00:06:52.677974 containerd[2108]: time="2025-09-13T00:06:52.677909447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:52.679062 containerd[2108]: time="2025-09-13T00:06:52.678847531Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:06:52.681250 containerd[2108]: time="2025-09-13T00:06:52.680150926Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:52.682727 containerd[2108]: time="2025-09-13T00:06:52.682414812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:52.683629 containerd[2108]: time="2025-09-13T00:06:52.683118842Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.799691006s" Sep 13 00:06:52.683629 containerd[2108]: time="2025-09-13T00:06:52.683159089Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:06:52.683799 containerd[2108]: time="2025-09-13T00:06:52.683752836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:06:53.159314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437619084.mount: Deactivated successfully. 
Sep 13 00:06:54.069875 containerd[2108]: time="2025-09-13T00:06:54.069812838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.071149 containerd[2108]: time="2025-09-13T00:06:54.070883599Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:06:54.073730 containerd[2108]: time="2025-09-13T00:06:54.072419266Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.075351 containerd[2108]: time="2025-09-13T00:06:54.075318420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.076612 containerd[2108]: time="2025-09-13T00:06:54.076577232Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.392795451s" Sep 13 00:06:54.076682 containerd[2108]: time="2025-09-13T00:06:54.076614817Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:06:54.077624 containerd[2108]: time="2025-09-13T00:06:54.077596261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:06:54.505810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72973025.mount: Deactivated successfully. 
Sep 13 00:06:54.511828 containerd[2108]: time="2025-09-13T00:06:54.511765422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.512733 containerd[2108]: time="2025-09-13T00:06:54.512609641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:06:54.513848 containerd[2108]: time="2025-09-13T00:06:54.513796846Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.516244 containerd[2108]: time="2025-09-13T00:06:54.516175476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:54.516737 containerd[2108]: time="2025-09-13T00:06:54.516571910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 438.828082ms" Sep 13 00:06:54.516737 containerd[2108]: time="2025-09-13T00:06:54.516603701Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:06:54.517188 containerd[2108]: time="2025-09-13T00:06:54.517078273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:06:54.978527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719410602.mount: Deactivated successfully. Sep 13 00:06:57.176030 containerd[2108]: time="2025-09-13T00:06:57.175394360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.182958 containerd[2108]: time="2025-09-13T00:06:57.182680412Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:06:57.191421 containerd[2108]: time="2025-09-13T00:06:57.191350110Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.200898 containerd[2108]: time="2025-09-13T00:06:57.200825426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.201598 containerd[2108]: time="2025-09-13T00:06:57.201384204Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.684281515s" Sep 13 00:06:57.201598 containerd[2108]: time="2025-09-13T00:06:57.201420438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:07:00.031997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
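Editor's note: the transient mount units above (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount2719410602.mount) are systemd-escaped paths: '/' separators become '-' and a literal '-' byte becomes \x2d. A rough Go re-implementation of that escaping — a sketch of what `systemd-escape --path` does, ignoring edge cases such as a leading dot:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // systemdEscapePath roughly mimics `systemd-escape --path`: strip the
    // surrounding slashes, turn '/' into '-', and hex-escape anything that
    // is not [A-Za-z0-9:_.] as \xNN (so '-' itself becomes \x2d).
    func systemdEscapePath(p string) string {
    	p = strings.Trim(p, "/")
    	var b strings.Builder
    	for i := 0; i < len(p); i++ {
    		c := p[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
    			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(systemdEscapePath("/var/lib/containerd/tmpmounts/containerd-mount2719410602") + ".mount")
    	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount2719410602.mount
    }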
Sep 13 00:07:00.042108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:00.111588 systemd[1]: Reloading requested from client PID 2825 ('systemctl') (unit session-7.scope)... Sep 13 00:07:00.111631 systemd[1]: Reloading... Sep 13 00:07:00.235733 zram_generator::config[2866]: No configuration found. Sep 13 00:07:00.408890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:00.494742 systemd[1]: Reloading finished in 382 ms. Sep 13 00:07:00.533595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:07:00.533899 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:07:00.534349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:00.543254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:00.778588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:00.782164 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:07:00.828451 kubelet[2941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:00.828451 kubelet[2941]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:00.828451 kubelet[2941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
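Editor's note: the three deprecation warnings above all point at flags that should move into the kubelet config file. A sketch of the equivalent KubeletConfiguration fields — the endpoint value is an assumption (containerd's conventional socket path), and the plugin dir is taken from the Flexvolume record further down; --pod-infra-container-image has no config-file equivalent, since the warning says the image garbage collector will learn the sandbox image from CRI instead:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (assumed containerd default socket)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # replaces --volume-plugin-dir (path as logged by the Flexvolume probe below)
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/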
Sep 13 00:07:00.831149 kubelet[2941]: I0913 00:07:00.830580 2941 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:07:01.142069 kubelet[2941]: I0913 00:07:01.141937 2941 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:07:01.142069 kubelet[2941]: I0913 00:07:01.141973 2941 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:07:01.147989 kubelet[2941]: I0913 00:07:01.147859 2941 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:07:01.303429 kubelet[2941]: I0913 00:07:01.303296 2941 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:07:01.305741 kubelet[2941]: E0913 00:07:01.304824 2941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:01.352070 kubelet[2941]: E0913 00:07:01.352018 2941 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:07:01.352070 kubelet[2941]: I0913 00:07:01.352063 2941 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:07:01.368256 kubelet[2941]: I0913 00:07:01.368221 2941 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:07:01.372280 kubelet[2941]: I0913 00:07:01.372232 2941 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:07:01.372511 kubelet[2941]: I0913 00:07:01.372459 2941 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:07:01.372742 kubelet[2941]: I0913 00:07:01.372507 2941 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-42","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:07:01.372885 kubelet[2941]: I0913 00:07:01.372757 2941 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:07:01.372885 kubelet[2941]: I0913 00:07:01.372772 2941 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:07:01.372964 kubelet[2941]: I0913 00:07:01.372900 2941 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:01.386098 kubelet[2941]: I0913 00:07:01.385855 2941 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:07:01.386098 kubelet[2941]: I0913 00:07:01.385930 2941 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:07:01.394487 kubelet[2941]: I0913 00:07:01.394024 2941 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:07:01.394487 kubelet[2941]: I0913 00:07:01.394082 2941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:07:01.403283 kubelet[2941]: I0913 00:07:01.403096 2941 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:07:01.407621 kubelet[2941]: I0913 00:07:01.407591 2941 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:07:01.409242 kubelet[2941]: W0913 00:07:01.407847 2941 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
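Editor's note: the HardEvictionThresholds array in the nodeConfig dump above is the stock kubelet default set. Expressed as the corresponding config-file stanza, those exact logged values (Percentage 0.1 → "10%", 0.05 → "5%", 0.15 → "15%", Quantity "100Mi") would read:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"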
Sep 13 00:07:01.409242 kubelet[2941]: I0913 00:07:01.408525 2941 server.go:1274] "Started kubelet" Sep 13 00:07:01.409242 kubelet[2941]: W0913 00:07:01.408697 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-42&limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:01.409242 kubelet[2941]: E0913 00:07:01.408780 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-42&limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:01.435937 kubelet[2941]: I0913 00:07:01.434954 2941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:07:01.435937 kubelet[2941]: I0913 00:07:01.435518 2941 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:07:01.442505 kubelet[2941]: I0913 00:07:01.442232 2941 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:07:01.444457 kubelet[2941]: I0913 00:07:01.443931 2941 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:07:01.444457 kubelet[2941]: I0913 00:07:01.444285 2941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:07:01.449778 kubelet[2941]: I0913 00:07:01.449186 2941 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:07:01.449778 kubelet[2941]: W0913 00:07:01.449550 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:01.450130 kubelet[2941]: E0913 00:07:01.450012 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:01.452232 kubelet[2941]: I0913 00:07:01.452136 2941 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:07:01.452739 kubelet[2941]: E0913 00:07:01.452701 2941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-42\" not found" Sep 13 00:07:01.452942 kubelet[2941]: E0913 00:07:01.450599 2941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.42:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-42.1864aedd9bef5b91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-42,UID:ip-172-31-25-42,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-42,},FirstTimestamp:2025-09-13 00:07:01.408496529 +0000 UTC m=+0.621527255,LastTimestamp:2025-09-13 00:07:01.408496529 +0000 UTC 
m=+0.621527255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-42,}" Sep 13 00:07:01.464671 kubelet[2941]: I0913 00:07:01.456394 2941 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:07:01.464671 kubelet[2941]: I0913 00:07:01.456481 2941 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:07:01.464671 kubelet[2941]: W0913 00:07:01.463222 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:01.464671 kubelet[2941]: E0913 00:07:01.463301 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:01.464671 kubelet[2941]: E0913 00:07:01.463390 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": dial tcp 172.31.25.42:6443: connect: connection refused" interval="200ms" Sep 13 00:07:01.464671 kubelet[2941]: I0913 00:07:01.463664 2941 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:07:01.464671 kubelet[2941]: I0913 00:07:01.463806 2941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:07:01.510740 kubelet[2941]: I0913 00:07:01.509674 2941 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:07:01.555696 kubelet[2941]: E0913 00:07:01.555657 2941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-42\" not found" Sep 13 00:07:01.622481 kubelet[2941]: I0913 00:07:01.622428 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:07:01.632995 kubelet[2941]: I0913 00:07:01.632957 2941 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:07:01.632995 kubelet[2941]: I0913 00:07:01.632997 2941 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:07:01.633179 kubelet[2941]: I0913 00:07:01.633022 2941 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:07:01.633179 kubelet[2941]: E0913 00:07:01.633074 2941 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:07:01.644931 kubelet[2941]: W0913 00:07:01.637387 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:01.646626 kubelet[2941]: E0913 00:07:01.644907 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:01.656763 kubelet[2941]: E0913 00:07:01.656724 2941 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-42\" not found" Sep 13 00:07:01.664748 kubelet[2941]: E0913 00:07:01.664629 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": dial tcp 172.31.25.42:6443: connect: connection refused" interval="400ms" Sep 13 00:07:01.672183 kubelet[2941]: I0913 00:07:01.671961 2941 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:07:01.672183 kubelet[2941]: I0913 00:07:01.671984 2941 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:07:01.672183 kubelet[2941]: I0913 00:07:01.672010 2941 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:01.675298 kubelet[2941]: I0913 00:07:01.674850 2941 policy_none.go:49] "None policy: Start" Sep 13 00:07:01.677197 kubelet[2941]: I0913 00:07:01.676808 2941 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:07:01.677197 kubelet[2941]: I0913 00:07:01.676893 2941 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:07:01.687353 kubelet[2941]: I0913 00:07:01.687320 2941 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:07:01.687767 kubelet[2941]: I0913 00:07:01.687750 2941 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:07:01.688731 kubelet[2941]: I0913 00:07:01.687866 2941 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:07:01.699636 kubelet[2941]: I0913 00:07:01.699591 2941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:07:01.703087 kubelet[2941]: E0913 00:07:01.703059 2941 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-42\" not found" Sep 13 00:07:01.758154 kubelet[2941]: I0913 00:07:01.757113 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: 
\"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:01.758154 kubelet[2941]: I0913 00:07:01.757156 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: \"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:01.758154 kubelet[2941]: I0913 00:07:01.757184 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-ca-certs\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: \"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:01.807367 kubelet[2941]: I0913 00:07:01.806224 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:01.807367 kubelet[2941]: E0913 00:07:01.806959 2941 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.42:6443/api/v1/nodes\": dial tcp 172.31.25.42:6443: connect: connection refused" node="ip-172-31-25-42" Sep 13 00:07:01.958131 kubelet[2941]: I0913 00:07:01.957901 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:01.958131 kubelet[2941]: I0913 00:07:01.957965 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5a6a7c7c9b79408e476cb299f4d42ad-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-42\" (UID: \"c5a6a7c7c9b79408e476cb299f4d42ad\") " pod="kube-system/kube-scheduler-ip-172-31-25-42" Sep 13 00:07:01.958131 kubelet[2941]: I0913 00:07:01.957995 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:01.958131 kubelet[2941]: I0913 00:07:01.958021 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:01.958131 kubelet[2941]: I0913 00:07:01.958049 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:01.958833 kubelet[2941]: I0913 00:07:01.958075 2941 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:02.009528 kubelet[2941]: I0913 00:07:02.009491 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:02.009890 kubelet[2941]: E0913 00:07:02.009855 2941 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.42:6443/api/v1/nodes\": dial tcp 172.31.25.42:6443: connect: connection refused" node="ip-172-31-25-42" Sep 13 00:07:02.065302 kubelet[2941]: E0913 00:07:02.065247 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": dial tcp 172.31.25.42:6443: connect: connection refused" interval="800ms" Sep 13 00:07:02.105398 containerd[2108]: time="2025-09-13T00:07:02.105351046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-42,Uid:8f59c153bfd788fbf4fd8048991288a8,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.111423 containerd[2108]: time="2025-09-13T00:07:02.111026153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-42,Uid:c5a6a7c7c9b79408e476cb299f4d42ad,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.111423 containerd[2108]: time="2025-09-13T00:07:02.111026435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-42,Uid:dd2f396386988f0319a130ca431916a9,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:02.416168 kubelet[2941]: I0913 00:07:02.416077 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:02.416900 kubelet[2941]: E0913 00:07:02.416772 2941 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.42:6443/api/v1/nodes\": dial tcp 172.31.25.42:6443: connect: connection refused" node="ip-172-31-25-42" Sep 13 00:07:02.598670 kubelet[2941]: W0913 00:07:02.598606 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:02.598900 kubelet[2941]: E0913 00:07:02.598681 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:02.647586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535508553.mount: Deactivated successfully. 
Sep 13 00:07:02.668388 containerd[2108]: time="2025-09-13T00:07:02.666882325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:02.669441 containerd[2108]: time="2025-09-13T00:07:02.669383917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:02.675078 containerd[2108]: time="2025-09-13T00:07:02.675009597Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:02.679502 containerd[2108]: time="2025-09-13T00:07:02.679371642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:07:02.681833 containerd[2108]: time="2025-09-13T00:07:02.681599346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:07:02.681833 containerd[2108]: time="2025-09-13T00:07:02.681656647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:07:02.681833 containerd[2108]: time="2025-09-13T00:07:02.681781482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:02.724910 containerd[2108]: time="2025-09-13T00:07:02.724829325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:02.728231 containerd[2108]: time="2025-09-13T00:07:02.728049885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.49234ms" Sep 13 00:07:02.731748 containerd[2108]: time="2025-09-13T00:07:02.729391386Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.177646ms" Sep 13 00:07:02.732138 kubelet[2941]: W0913 00:07:02.732076 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-42&limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:02.732241 kubelet[2941]: E0913 00:07:02.732160 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-42&limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:02.735076 
containerd[2108]: time="2025-09-13T00:07:02.733207180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 621.852249ms" Sep 13 00:07:02.866360 kubelet[2941]: E0913 00:07:02.866307 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": dial tcp 172.31.25.42:6443: connect: connection refused" interval="1.6s" Sep 13 00:07:02.922541 kubelet[2941]: W0913 00:07:02.922345 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:02.922541 kubelet[2941]: E0913 00:07:02.922428 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:02.953157 kubelet[2941]: W0913 00:07:02.953033 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:02.953157 kubelet[2941]: E0913 00:07:02.953113 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:02.970892 containerd[2108]: time="2025-09-13T00:07:02.970795154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.971324 containerd[2108]: time="2025-09-13T00:07:02.971123374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.971324 containerd[2108]: time="2025-09-13T00:07:02.971220535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.971978 containerd[2108]: time="2025-09-13T00:07:02.971838236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.972477 containerd[2108]: time="2025-09-13T00:07:02.968632935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.972477 containerd[2108]: time="2025-09-13T00:07:02.972218807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.972477 containerd[2108]: time="2025-09-13T00:07:02.972239091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.972477 containerd[2108]: time="2025-09-13T00:07:02.972350255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.980477 containerd[2108]: time="2025-09-13T00:07:02.980090632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:02.980477 containerd[2108]: time="2025-09-13T00:07:02.980344844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:02.980957 containerd[2108]: time="2025-09-13T00:07:02.980451778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:02.984902 containerd[2108]: time="2025-09-13T00:07:02.982839629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:03.107583 containerd[2108]: time="2025-09-13T00:07:03.107462519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-42,Uid:8f59c153bfd788fbf4fd8048991288a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3276f564a68edb36a26a08603b72c007071655d9e8a0a412041949bcee774ec0\"" Sep 13 00:07:03.113242 containerd[2108]: time="2025-09-13T00:07:03.113092995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-42,Uid:c5a6a7c7c9b79408e476cb299f4d42ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb0194975b0b27563a89499031af33e61b7146008a02159a9a4c6427c4bbb1b\"" Sep 13 00:07:03.116968 containerd[2108]: time="2025-09-13T00:07:03.116931578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-42,Uid:dd2f396386988f0319a130ca431916a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"79196e30514967e6405eb826a62d2ac904c032b9f809e4994359d79c44e2dbf5\"" Sep 13 00:07:03.119681 containerd[2108]: time="2025-09-13T00:07:03.119556206Z" level=info msg="CreateContainer within sandbox \"3276f564a68edb36a26a08603b72c007071655d9e8a0a412041949bcee774ec0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:07:03.120002 containerd[2108]: time="2025-09-13T00:07:03.119683871Z" level=info msg="CreateContainer within sandbox \"6eb0194975b0b27563a89499031af33e61b7146008a02159a9a4c6427c4bbb1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:07:03.120532 containerd[2108]: time="2025-09-13T00:07:03.120487495Z" level=info msg="CreateContainer within sandbox \"79196e30514967e6405eb826a62d2ac904c032b9f809e4994359d79c44e2dbf5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:07:03.179441 containerd[2108]: time="2025-09-13T00:07:03.179321510Z" level=info msg="CreateContainer within sandbox \"3276f564a68edb36a26a08603b72c007071655d9e8a0a412041949bcee774ec0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"deafd2ddc0021d2c093101011581ef3d8871e4a9f84eba7898a78be4ee5cdd0a\"" Sep 13 00:07:03.183748 containerd[2108]: time="2025-09-13T00:07:03.182826851Z" level=info msg="CreateContainer within sandbox \"6eb0194975b0b27563a89499031af33e61b7146008a02159a9a4c6427c4bbb1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5\"" Sep 13 00:07:03.183748 containerd[2108]: time="2025-09-13T00:07:03.183100013Z" level=info msg="StartContainer for \"deafd2ddc0021d2c093101011581ef3d8871e4a9f84eba7898a78be4ee5cdd0a\"" Sep 13 00:07:03.187545 containerd[2108]: time="2025-09-13T00:07:03.186398776Z" level=info msg="CreateContainer within sandbox \"79196e30514967e6405eb826a62d2ac904c032b9f809e4994359d79c44e2dbf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c\"" Sep 13 00:07:03.187545 containerd[2108]: time="2025-09-13T00:07:03.186578079Z" level=info msg="StartContainer for \"bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5\"" Sep 13 00:07:03.193643 containerd[2108]: time="2025-09-13T00:07:03.193513584Z" level=info msg="StartContainer for \"3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c\"" Sep 13 00:07:03.220252 kubelet[2941]: I0913 00:07:03.220220 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:03.220629 kubelet[2941]: E0913 00:07:03.220523 2941 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.42:6443/api/v1/nodes\": dial tcp 172.31.25.42:6443: connect: connection refused" node="ip-172-31-25-42" Sep 13 00:07:03.300609 containerd[2108]: time="2025-09-13T00:07:03.300374702Z" level=info msg="StartContainer for \"deafd2ddc0021d2c093101011581ef3d8871e4a9f84eba7898a78be4ee5cdd0a\" returns successfully" Sep 13 00:07:03.308483 containerd[2108]: time="2025-09-13T00:07:03.308449220Z" level=info msg="StartContainer for \"3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c\" returns successfully" Sep 13 00:07:03.334472 containerd[2108]: time="2025-09-13T00:07:03.334421309Z" level=info msg="StartContainer for \"bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5\" returns successfully" Sep 13 00:07:03.424331 kubelet[2941]: E0913 00:07:03.424281 2941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:04.364097 kubelet[2941]: W0913 00:07:04.363981 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:04.364097 kubelet[2941]: E0913 00:07:04.364066 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:04.467334 kubelet[2941]: E0913 00:07:04.467279 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": dial tcp 172.31.25.42:6443: connect: connection refused" interval="3.2s" Sep 13 00:07:04.704641 kubelet[2941]: W0913 00:07:04.704567 2941 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:04.704903 kubelet[2941]: E0913 00:07:04.704655 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:04.823178 kubelet[2941]: I0913 00:07:04.822697 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:04.823178 kubelet[2941]: E0913 00:07:04.823052 2941 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.42:6443/api/v1/nodes\": dial tcp 172.31.25.42:6443: connect: connection refused" node="ip-172-31-25-42" Sep 13 00:07:04.921462 kubelet[2941]: W0913 00:07:04.921290 2941 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.42:6443: connect: connection refused Sep 13 00:07:04.921462 kubelet[2941]: E0913 00:07:04.921374 2941 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.42:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:06.437058 kubelet[2941]: I0913 00:07:06.437009 2941 apiserver.go:52] "Watching apiserver" Sep 13 00:07:06.457323 kubelet[2941]: I0913 00:07:06.457286 2941 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:07:06.622585 kubelet[2941]: E0913 00:07:06.622554 2941 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-42" not found Sep 13 00:07:06.986759 kubelet[2941]: E0913 00:07:06.986725 2941 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-42" not found Sep 13 00:07:07.407956 kubelet[2941]: E0913 00:07:07.407922 2941 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-42" not found Sep 13 00:07:07.671996 kubelet[2941]: E0913 00:07:07.671862 2941 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-42\" not found" node="ip-172-31-25-42" Sep 13 00:07:08.026030 kubelet[2941]: I0913 00:07:08.025498 2941 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:08.033860 kubelet[2941]: I0913 00:07:08.033803 2941 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-42" Sep 13 00:07:08.320076 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:07:08.465895 systemd[1]: Reloading requested from client PID 3224 ('systemctl') (unit session-7.scope)... Sep 13 00:07:08.465913 systemd[1]: Reloading... 
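Editor's note: the reload below re-emits the docker.socket warning first seen during the earlier daemon reload — ListenStream= still references a path below the legacy /var/run/ directory, which systemd silently rewrites to /run/docker.sock at load time. A sketch of the corrected [Socket] stanza; only the ListenStream line is taken from the warning, the remaining directives are assumed boilerplate for a Docker socket unit:

    [Socket]
    # was: ListenStream=/var/run/docker.sock  (legacy path; rewritten at runtime)
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker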
Sep 13 00:07:08.625785 zram_generator::config[3264]: No configuration found. Sep 13 00:07:08.785768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:08.888859 systemd[1]: Reloading finished in 422 ms. Sep 13 00:07:08.925049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:08.938924 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:07:08.939278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:08.950096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:09.225944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:09.226795 (kubelet)[3333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:07:09.300853 kubelet[3333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:09.300853 kubelet[3333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:09.300853 kubelet[3333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:09.300853 kubelet[3333]: I0913 00:07:09.294609 3333 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:07:09.310823 kubelet[3333]: I0913 00:07:09.310502 3333 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:07:09.310823 kubelet[3333]: I0913 00:07:09.310528 3333 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:07:09.313882 kubelet[3333]: I0913 00:07:09.313851 3333 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:07:09.315617 kubelet[3333]: I0913 00:07:09.315598 3333 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:07:09.329986 kubelet[3333]: I0913 00:07:09.329959 3333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:07:09.334830 kubelet[3333]: E0913 00:07:09.334767 3333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:07:09.334830 kubelet[3333]: I0913 00:07:09.334801 3333 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:07:09.344383 kubelet[3333]: I0913 00:07:09.343088 3333 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:07:09.344383 kubelet[3333]: I0913 00:07:09.343626 3333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:07:09.344383 kubelet[3333]: I0913 00:07:09.343812 3333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:07:09.344383 kubelet[3333]: I0913 00:07:09.343855 3333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-42","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344219 3333 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344233 3333 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344267 3333 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344381 3333 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344395 3333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344430 3333 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:07:09.344750 kubelet[3333]: I0913 00:07:09.344444 3333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:07:09.351507 kubelet[3333]: I0913 00:07:09.351452 3333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:07:09.354045 kubelet[3333]: I0913 00:07:09.353970 3333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:07:09.356363 kubelet[3333]: I0913 00:07:09.356223 3333 server.go:1274] "Started kubelet" Sep 13 00:07:09.361806 kubelet[3333]: I0913 00:07:09.361001 3333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:07:09.362323 kubelet[3333]: 
I0913 00:07:09.362250 3333 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:07:09.366902 kubelet[3333]: I0913 00:07:09.366205 3333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:07:09.366902 kubelet[3333]: I0913 00:07:09.366432 3333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:07:09.367675 kubelet[3333]: I0913 00:07:09.367644 3333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:07:09.372788 kubelet[3333]: I0913 00:07:09.372427 3333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:07:09.375041 kubelet[3333]: I0913 00:07:09.374408 3333 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:07:09.375041 kubelet[3333]: E0913 00:07:09.374683 3333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-42\" not found" Sep 13 00:07:09.390187 kubelet[3333]: I0913 00:07:09.390161 3333 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:07:09.390567 kubelet[3333]: I0913 00:07:09.390544 3333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:07:09.395785 kubelet[3333]: I0913 00:07:09.395166 3333 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:07:09.397244 kubelet[3333]: I0913 00:07:09.397222 3333 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:07:09.399186 kubelet[3333]: I0913 00:07:09.397926 3333 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:07:09.409874 kubelet[3333]: I0913 00:07:09.409787 3333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:07:09.412867 kubelet[3333]: I0913 00:07:09.411237 3333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:07:09.412867 kubelet[3333]: I0913 00:07:09.411270 3333 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:07:09.412867 kubelet[3333]: I0913 00:07:09.411295 3333 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:07:09.412867 kubelet[3333]: E0913 00:07:09.411342 3333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:07:09.484571 kubelet[3333]: I0913 00:07:09.484460 3333 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:07:09.484571 kubelet[3333]: I0913 00:07:09.484484 3333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:07:09.484571 kubelet[3333]: I0913 00:07:09.484505 3333 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:09.485829 kubelet[3333]: I0913 00:07:09.484678 3333 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:07:09.485829 kubelet[3333]: I0913 00:07:09.484692 3333 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:07:09.485829 kubelet[3333]: I0913 00:07:09.484843 3333 policy_none.go:49] "None policy: Start" Sep 13 00:07:09.485829 kubelet[3333]: I0913 00:07:09.485515 3333 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:07:09.485829 kubelet[3333]: I0913 00:07:09.485537 3333 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:07:09.486851 kubelet[3333]: I0913 00:07:09.486818 3333 state_mem.go:75] "Updated machine memory state" Sep 13 00:07:09.490011 kubelet[3333]: I0913 00:07:09.489742 3333 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:07:09.490011 kubelet[3333]: I0913 00:07:09.489943 3333 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:07:09.490011 kubelet[3333]: I0913 00:07:09.489955 3333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:07:09.494532 kubelet[3333]: I0913 00:07:09.493558 3333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:07:09.600784 kubelet[3333]: I0913 00:07:09.599810 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-ca-certs\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: \"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:09.600784 kubelet[3333]: I0913 00:07:09.599862 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: \"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:09.600784 kubelet[3333]: I0913 00:07:09.599903 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f59c153bfd788fbf4fd8048991288a8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-42\" (UID: \"8f59c153bfd788fbf4fd8048991288a8\") " pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:09.600784 kubelet[3333]: I0913 00:07:09.599934 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:09.600784 kubelet[3333]: I0913 00:07:09.599958 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:09.601093 kubelet[3333]: I0913 00:07:09.599982 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:09.601093 kubelet[3333]: I0913 00:07:09.600008 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5a6a7c7c9b79408e476cb299f4d42ad-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-42\" (UID: \"c5a6a7c7c9b79408e476cb299f4d42ad\") " pod="kube-system/kube-scheduler-ip-172-31-25-42" Sep 13 00:07:09.601093 kubelet[3333]: I0913 00:07:09.600033 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:09.601093 kubelet[3333]: I0913 00:07:09.600057 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd2f396386988f0319a130ca431916a9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-42\" (UID: \"dd2f396386988f0319a130ca431916a9\") " pod="kube-system/kube-controller-manager-ip-172-31-25-42" Sep 13 00:07:09.602544 kubelet[3333]: I0913 00:07:09.602517 3333 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-42" Sep 13 00:07:09.616007 kubelet[3333]: I0913 00:07:09.615468 3333 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-25-42" Sep 13 00:07:09.616007 kubelet[3333]: I0913 00:07:09.615596 3333 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-42" Sep 13 00:07:10.362869 kubelet[3333]: I0913 00:07:10.361456 3333 apiserver.go:52] "Watching apiserver" Sep 13 00:07:10.396534 kubelet[3333]: I0913 00:07:10.396450 3333 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:07:10.458079 kubelet[3333]: E0913 00:07:10.457917 3333 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-25-42\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-42" Sep 13 00:07:10.537929 kubelet[3333]: I0913 00:07:10.537533 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-42" podStartSLOduration=1.537510669 podStartE2EDuration="1.537510669s" podCreationTimestamp="2025-09-13 00:07:09 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:10.514826644 +0000 UTC m=+1.275842532" watchObservedRunningTime="2025-09-13 00:07:10.537510669 +0000 UTC m=+1.298526558" Sep 13 00:07:10.574120 kubelet[3333]: I0913 00:07:10.573289 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-42" podStartSLOduration=1.573269257 podStartE2EDuration="1.573269257s" podCreationTimestamp="2025-09-13 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:10.539059689 +0000 UTC m=+1.300075575" watchObservedRunningTime="2025-09-13 00:07:10.573269257 +0000 UTC m=+1.334285147" Sep 13 00:07:10.574120 kubelet[3333]: I0913 00:07:10.573429 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-42" podStartSLOduration=1.5734199260000001 podStartE2EDuration="1.573419926s" podCreationTimestamp="2025-09-13 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:10.572259028 +0000 UTC m=+1.333274918" watchObservedRunningTime="2025-09-13 00:07:10.573419926 +0000 UTC m=+1.334435816" Sep 13 00:07:13.032408 kubelet[3333]: I0913 00:07:13.032378 3333 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:07:13.032862 containerd[2108]: time="2025-09-13T00:07:13.032682746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:07:13.033187 kubelet[3333]: I0913 00:07:13.032860 3333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:07:13.932056 kubelet[3333]: I0913 00:07:13.932003 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e767c64-57b7-47a1-89b1-302cc452af5c-kube-proxy\") pod \"kube-proxy-stlwf\" (UID: \"4e767c64-57b7-47a1-89b1-302cc452af5c\") " pod="kube-system/kube-proxy-stlwf" Sep 13 00:07:13.932056 kubelet[3333]: I0913 00:07:13.932052 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e767c64-57b7-47a1-89b1-302cc452af5c-lib-modules\") pod \"kube-proxy-stlwf\" (UID: \"4e767c64-57b7-47a1-89b1-302cc452af5c\") " pod="kube-system/kube-proxy-stlwf" Sep 13 00:07:13.932408 kubelet[3333]: I0913 00:07:13.932083 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw2s\" (UniqueName: \"kubernetes.io/projected/4e767c64-57b7-47a1-89b1-302cc452af5c-kube-api-access-fbw2s\") pod \"kube-proxy-stlwf\" (UID: \"4e767c64-57b7-47a1-89b1-302cc452af5c\") " pod="kube-system/kube-proxy-stlwf" Sep 13 00:07:13.932408 kubelet[3333]: I0913 00:07:13.932105 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e767c64-57b7-47a1-89b1-302cc452af5c-xtables-lock\") pod \"kube-proxy-stlwf\" (UID: \"4e767c64-57b7-47a1-89b1-302cc452af5c\") " pod="kube-system/kube-proxy-stlwf" Sep 13 00:07:14.138473 containerd[2108]: time="2025-09-13T00:07:14.138398452Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stlwf,Uid:4e767c64-57b7-47a1-89b1-302cc452af5c,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:14.203350 containerd[2108]: time="2025-09-13T00:07:14.202719271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:14.203350 containerd[2108]: time="2025-09-13T00:07:14.202776328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:14.203546 containerd[2108]: time="2025-09-13T00:07:14.202791696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:14.203546 containerd[2108]: time="2025-09-13T00:07:14.202876426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:14.237895 kubelet[3333]: I0913 00:07:14.237515 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k776g\" (UniqueName: \"kubernetes.io/projected/8b15c8f7-a3b3-439f-987e-21e1c01c1dd8-kube-api-access-k776g\") pod \"tigera-operator-58fc44c59b-cnj9l\" (UID: \"8b15c8f7-a3b3-439f-987e-21e1c01c1dd8\") " pod="tigera-operator/tigera-operator-58fc44c59b-cnj9l" Sep 13 00:07:14.237895 kubelet[3333]: I0913 00:07:14.237569 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b15c8f7-a3b3-439f-987e-21e1c01c1dd8-var-lib-calico\") pod \"tigera-operator-58fc44c59b-cnj9l\" (UID: \"8b15c8f7-a3b3-439f-987e-21e1c01c1dd8\") " pod="tigera-operator/tigera-operator-58fc44c59b-cnj9l" Sep 13 00:07:14.259364 containerd[2108]: time="2025-09-13T00:07:14.259319457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stlwf,Uid:4e767c64-57b7-47a1-89b1-302cc452af5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"51740ec4242bf756233f44c949576fe6d0beaffb8dbc0cfdb0a75e435cf65b84\"" Sep 13 00:07:14.263329 containerd[2108]: time="2025-09-13T00:07:14.263272154Z" level=info msg="CreateContainer within sandbox \"51740ec4242bf756233f44c949576fe6d0beaffb8dbc0cfdb0a75e435cf65b84\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:07:14.282135 containerd[2108]: time="2025-09-13T00:07:14.282083746Z" level=info msg="CreateContainer within sandbox \"51740ec4242bf756233f44c949576fe6d0beaffb8dbc0cfdb0a75e435cf65b84\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66dc8a6492a30f43884bb2d6142cc29bd99c5349baeb14230be87b1e77d02e60\"" Sep 13 00:07:14.282780 containerd[2108]: time="2025-09-13T00:07:14.282750718Z" level=info msg="StartContainer for \"66dc8a6492a30f43884bb2d6142cc29bd99c5349baeb14230be87b1e77d02e60\"" Sep 13 00:07:14.340961 containerd[2108]: time="2025-09-13T00:07:14.340815853Z" level=info msg="StartContainer for \"66dc8a6492a30f43884bb2d6142cc29bd99c5349baeb14230be87b1e77d02e60\" returns successfully" Sep 13 00:07:14.503874 containerd[2108]: time="2025-09-13T00:07:14.503769560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-cnj9l,Uid:8b15c8f7-a3b3-439f-987e-21e1c01c1dd8,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:07:14.533667 containerd[2108]: time="2025-09-13T00:07:14.533240694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:14.533667 containerd[2108]: time="2025-09-13T00:07:14.533327054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:14.533667 containerd[2108]: time="2025-09-13T00:07:14.533428866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:14.533667 containerd[2108]: time="2025-09-13T00:07:14.533545239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:14.598408 containerd[2108]: time="2025-09-13T00:07:14.598377482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-cnj9l,Uid:8b15c8f7-a3b3-439f-987e-21e1c01c1dd8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"11476d4c949e2da569234c475506707b7011b90d15878823a1547272b2fd026a\"" Sep 13 00:07:14.601754 containerd[2108]: time="2025-09-13T00:07:14.601633348Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:07:15.048282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513761844.mount: Deactivated successfully. Sep 13 00:07:15.203005 kubelet[3333]: I0913 00:07:15.202917 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stlwf" podStartSLOduration=2.202899249 podStartE2EDuration="2.202899249s" podCreationTimestamp="2025-09-13 00:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:14.468098455 +0000 UTC m=+5.229114340" watchObservedRunningTime="2025-09-13 00:07:15.202899249 +0000 UTC m=+5.963915118" Sep 13 00:07:16.077504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876178425.mount: Deactivated successfully. 
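[editor's note: the kubelet entries throughout this excerpt use klog's standard header, `Lmmdd hh:mm:ss.uuuuuu PID file:line] msg`, wrapped in a journald timestamp prefix. A minimal Go sketch for pulling those fields out when grepping an excerpt like this one; the regex and the field names are illustrative assumptions, not kubelet code:]

```go
// Sketch: parse a klog-style kubelet entry as seen in this journal excerpt.
// Assumed layout: "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg".
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(
	`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)`)

func main() {
	line := `I0913 00:07:09.484460 3333 cpu_manager.go:214] "Starting CPU manager" policy="none"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```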
Sep 13 00:07:16.824390 containerd[2108]: time="2025-09-13T00:07:16.824339249Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.825889 containerd[2108]: time="2025-09-13T00:07:16.825721974Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:07:16.828146 containerd[2108]: time="2025-09-13T00:07:16.827057877Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.829915 containerd[2108]: time="2025-09-13T00:07:16.829140320Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.829915 containerd[2108]: time="2025-09-13T00:07:16.829804838Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.228132121s" Sep 13 00:07:16.829915 containerd[2108]: time="2025-09-13T00:07:16.829831785Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:07:16.852131 containerd[2108]: time="2025-09-13T00:07:16.852092357Z" level=info msg="CreateContainer within sandbox \"11476d4c949e2da569234c475506707b7011b90d15878823a1547272b2fd026a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:07:16.873341 containerd[2108]: time="2025-09-13T00:07:16.873274140Z" level=info msg="CreateContainer within sandbox \"11476d4c949e2da569234c475506707b7011b90d15878823a1547272b2fd026a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e\"" Sep 13 00:07:16.874561 containerd[2108]: time="2025-09-13T00:07:16.873987645Z" level=info msg="StartContainer for \"249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e\"" Sep 13 00:07:16.938072 containerd[2108]: time="2025-09-13T00:07:16.938028314Z" level=info msg="StartContainer for \"249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e\" returns successfully" Sep 13 00:07:17.077948 systemd[1]: run-containerd-runc-k8s.io-249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e-runc.54BYP9.mount: Deactivated successfully. Sep 13 00:07:17.500041 kubelet[3333]: I0913 00:07:17.499946 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-cnj9l" podStartSLOduration=1.263591353 podStartE2EDuration="3.499931192s" podCreationTimestamp="2025-09-13 00:07:14 +0000 UTC" firstStartedPulling="2025-09-13 00:07:14.599903219 +0000 UTC m=+5.360919096" lastFinishedPulling="2025-09-13 00:07:16.836243055 +0000 UTC m=+7.597258935" observedRunningTime="2025-09-13 00:07:17.497574561 +0000 UTC m=+8.258590449" watchObservedRunningTime="2025-09-13 00:07:17.499931192 +0000 UTC m=+8.260947079" Sep 13 00:07:22.372728 update_engine[2078]: I20250913 00:07:22.372637 2078 update_attempter.cc:509] Updating boot flags... 
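[editor's note: the pod_startup_latency_tracker entries above record both podStartE2EDuration and podStartSLOduration; for the tigera-operator pod the two differ by exactly the image-pull window (lastFinishedPulling − firstStartedPulling), since the SLO duration excludes image pulling. A small Go sketch rechecking that arithmetic from the timestamps logged above; the layout string is an assumption matching the printed format:]

```go
// Sketch: recompute the startup durations logged for
// tigera-operator-58fc44c59b-cnj9l from the timestamps above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-13 00:07:14 +0000 UTC")
	running := mustParse("2025-09-13 00:07:17.499931192 +0000 UTC")   // watchObservedRunningTime
	firstPull := mustParse("2025-09-13 00:07:14.599903219 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-09-13 00:07:16.836243055 +0000 UTC")  // lastFinishedPulling

	e2e := running.Sub(created)          // podStartE2EDuration: 3.499931192s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~1.263591356s
	fmt.Println(e2e, slo)                // matches the logged 1.263591353 up to float rounding
}
```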
Sep 13 00:07:22.486749 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3709) Sep 13 00:07:22.645743 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3710) Sep 13 00:07:22.757753 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3710) Sep 13 00:07:23.713249 sudo[2439]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:23.738424 sshd[2435]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:23.757682 systemd-logind[2075]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:07:23.759539 systemd[1]: sshd@6-172.31.25.42:22-139.178.89.65:33052.service: Deactivated successfully. Sep 13 00:07:23.776199 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:07:23.782456 systemd-logind[2075]: Removed session 7. Sep 13 00:07:28.045893 kubelet[3333]: I0913 00:07:28.045843 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34-typha-certs\") pod \"calico-typha-78b67fb446-zdht9\" (UID: \"c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34\") " pod="calico-system/calico-typha-78b67fb446-zdht9" Sep 13 00:07:28.045893 kubelet[3333]: I0913 00:07:28.045894 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34-tigera-ca-bundle\") pod \"calico-typha-78b67fb446-zdht9\" (UID: \"c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34\") " pod="calico-system/calico-typha-78b67fb446-zdht9" Sep 13 00:07:28.046532 kubelet[3333]: I0913 00:07:28.045920 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5rc\" (UniqueName: \"kubernetes.io/projected/c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34-kube-api-access-cd5rc\") pod \"calico-typha-78b67fb446-zdht9\" (UID: \"c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34\") " pod="calico-system/calico-typha-78b67fb446-zdht9" Sep 13 00:07:28.246367 kubelet[3333]: I0913 00:07:28.246306 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a8683ea-a5d8-40c7-8c4a-74b49091100d-tigera-ca-bundle\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.246793 kubelet[3333]: I0913 00:07:28.246398 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-var-lib-calico\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.246793 kubelet[3333]: I0913 00:07:28.246422 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-xtables-lock\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.246793 kubelet[3333]: I0913 00:07:28.246446 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srmz5\" (UniqueName: 
\"kubernetes.io/projected/3a8683ea-a5d8-40c7-8c4a-74b49091100d-kube-api-access-srmz5\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.246793 kubelet[3333]: I0913 00:07:28.246474 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-cni-bin-dir\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.246793 kubelet[3333]: I0913 00:07:28.246496 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3a8683ea-a5d8-40c7-8c4a-74b49091100d-node-certs\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.247039 kubelet[3333]: I0913 00:07:28.246520 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-lib-modules\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.247039 kubelet[3333]: I0913 00:07:28.246543 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-cni-log-dir\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.247039 kubelet[3333]: I0913 00:07:28.246588 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-cni-net-dir\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.247039 kubelet[3333]: I0913 00:07:28.246616 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-flexvol-driver-host\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.247039 kubelet[3333]: I0913 00:07:28.246642 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-policysync\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.248982 kubelet[3333]: I0913 00:07:28.246663 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3a8683ea-a5d8-40c7-8c4a-74b49091100d-var-run-calico\") pod \"calico-node-cgjf4\" (UID: \"3a8683ea-a5d8-40c7-8c4a-74b49091100d\") " pod="calico-system/calico-node-cgjf4" Sep 13 00:07:28.329792 containerd[2108]: time="2025-09-13T00:07:28.329652156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b67fb446-zdht9,Uid:c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:28.352466 kubelet[3333]: E0913 
00:07:28.352324 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.352466 kubelet[3333]: W0913 00:07:28.352361 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.353522 kubelet[3333]: E0913 00:07:28.353428 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"

[editor's note: the three-entry FlexVolume init-probe failure above (driver-call.go:262, driver-call.go:149, plugins.go:691) recurs verbatim at 00:07:28.374 and 00:07:28.382; the duplicate triplets are elided.]

Sep 13 00:07:28.427205 containerd[2108]: time="2025-09-13T00:07:28.427093821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:28.427205 containerd[2108]: time="2025-09-13T00:07:28.427166301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:28.427484 containerd[2108]: time="2025-09-13T00:07:28.427187079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.427484 containerd[2108]: time="2025-09-13T00:07:28.427302928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.530758 containerd[2108]: time="2025-09-13T00:07:28.529927787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cgjf4,Uid:3a8683ea-a5d8-40c7-8c4a-74b49091100d,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:28.538242 kubelet[3333]: E0913 00:07:28.538189 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912"

[editor's note: the same FlexVolume failure triplet then repeats continuously between 00:07:28.538 and 00:07:28.563; only the non-duplicate entries interleaved with it are kept below.]
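[editor's note: the triplet above is kubelet's FlexVolume prober hitting Calico's not-yet-installed nodeagent~uds driver: the exec fails because the uds binary is absent, the empty output then fails JSON decoding, hence "unexpected end of JSON input". This is typically a benign startup race that stops once calico-node installs the driver into flexvol-driver-host. A tiny Go sketch reproducing both error messages; the driverStatus struct is a stand-in, not kubelet's actual type:]

```go
// Sketch: reproduce the two errors in the kubelet triplet above.
// json.Unmarshal of empty output -> "unexpected end of JSON input";
// exec of a missing binary -> "executable file not found in $PATH".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// stand-in for the status kubelet expects back from `uds init`
type driverStatus struct {
	Status string `json:"status"`
}

func main() {
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input

	_, err = exec.LookPath("uds") // assumes no `uds` binary on PATH
	fmt.Println(err)              // exec: "uds": executable file not found in $PATH
}
```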
Sep 13 00:07:28.556730 kubelet[3333]: I0913 00:07:28.555424 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3735d737-0352-445e-b2c0-da8688517912-kubelet-dir\") pod \"csi-node-driver-6ztb2\" (UID: \"3735d737-0352-445e-b2c0-da8688517912\") " pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:28.557179 kubelet[3333]: I0913 00:07:28.557089 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3735d737-0352-445e-b2c0-da8688517912-registration-dir\") pod \"csi-node-driver-6ztb2\" (UID: \"3735d737-0352-445e-b2c0-da8688517912\") " pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:28.561894 kubelet[3333]: I0913 00:07:28.557484 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3735d737-0352-445e-b2c0-da8688517912-varrun\") pod \"csi-node-driver-6ztb2\" (UID: \"3735d737-0352-445e-b2c0-da8688517912\") " pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:28.562327 kubelet[3333]: I0913 00:07:28.560886 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3735d737-0352-445e-b2c0-da8688517912-socket-dir\") pod \"csi-node-driver-6ztb2\" (UID: \"3735d737-0352-445e-b2c0-da8688517912\") " pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:28.563092 kubelet[3333]: I0913 00:07:28.562419 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxr5\" (UniqueName: \"kubernetes.io/projected/3735d737-0352-445e-b2c0-da8688517912-kube-api-access-6vxr5\") pod \"csi-node-driver-6ztb2\" (UID: \"3735d737-0352-445e-b2c0-da8688517912\") " pod="calico-system/csi-node-driver-6ztb2"
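[editor's note: each VerifyControllerAttachedVolume entry above identifies a volume by a UniqueName of the form <plugin>/<podUID>-<volumeName>. A short Go sketch splitting the names logged here; the helper is illustrative and assumes the standard 36-character pod UID visible in these entries:]

```go
// Sketch: split the UniqueName strings from the reconciler entries above
// into plugin, pod UID, and volume name. Assumes a 36-character UUID,
// as in the entries logged here.
package main

import (
	"fmt"
	"strings"
)

func splitUniqueName(u string) (plugin, podUID, volume string) {
	i := strings.LastIndex(u, "/")
	plugin, rest := u[:i], u[i+1:]
	return plugin, rest[:36], rest[37:] // 36-char UUID, "-", then volume name
}

func main() {
	for _, u := range []string{
		"kubernetes.io/host-path/3735d737-0352-445e-b2c0-da8688517912-socket-dir",
		"kubernetes.io/projected/3735d737-0352-445e-b2c0-da8688517912-kube-api-access-6vxr5",
	} {
		p, uid, v := splitUniqueName(u)
		fmt.Printf("plugin=%s uid=%s volume=%s\n", p, uid, v)
	}
}
```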
Sep 13 00:07:28.636372 containerd[2108]: time="2025-09-13T00:07:28.629676910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:28.636372 containerd[2108]: time="2025-09-13T00:07:28.629875429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:28.636372 containerd[2108]: time="2025-09-13T00:07:28.629893785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.636372 containerd[2108]: time="2025-09-13T00:07:28.630015225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.636372 containerd[2108]: time="2025-09-13T00:07:28.635931939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b67fb446-zdht9,Uid:c5b33039-1d76-4ae7-83d6-4c6bbf3bcd34,Namespace:calico-system,Attempt:0,} returns sandbox id \"e215f8514333b34c5664fa358bae1ee1cf6b91824db2ab2a0a4329d70eb75993\"" Sep 13 00:07:28.644733 containerd[2108]: time="2025-09-13T00:07:28.644161000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""

[editor's note: the FlexVolume failure triplet then repeats continuously from 00:07:28.664 until the excerpt cuts off mid-entry at 00:07:28.677; the duplicates are elided.]
Error: unexpected end of JSON input" Sep 13 00:07:28.680483 kubelet[3333]: E0913 00:07:28.680463 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.680835 kubelet[3333]: W0913 00:07:28.680594 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.682900 kubelet[3333]: E0913 00:07:28.681291 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.682900 kubelet[3333]: W0913 00:07:28.681308 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.682900 kubelet[3333]: E0913 00:07:28.681846 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.682900 kubelet[3333]: W0913 00:07:28.681858 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.683234 kubelet[3333]: E0913 00:07:28.683143 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.683234 kubelet[3333]: W0913 00:07:28.683157 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.683234 kubelet[3333]: E0913 00:07:28.683175 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.685101 kubelet[3333]: E0913 00:07:28.685082 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.685424 kubelet[3333]: E0913 00:07:28.685401 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.685641 kubelet[3333]: E0913 00:07:28.685619 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.686118 kubelet[3333]: E0913 00:07:28.686007 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.686228 kubelet[3333]: W0913 00:07:28.686206 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.686478 kubelet[3333]: E0913 00:07:28.686435 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:28.687558 kubelet[3333]: E0913 00:07:28.687544 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.687672 kubelet[3333]: W0913 00:07:28.687649 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.687942 kubelet[3333]: E0913 00:07:28.687876 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.688876 kubelet[3333]: E0913 00:07:28.688780 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.689029 kubelet[3333]: W0913 00:07:28.688992 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.689720 kubelet[3333]: E0913 00:07:28.689392 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.690347 kubelet[3333]: E0913 00:07:28.690229 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.690775 kubelet[3333]: W0913 00:07:28.690523 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.691146 kubelet[3333]: E0913 00:07:28.690570 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.691731 kubelet[3333]: E0913 00:07:28.691701 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.692258 kubelet[3333]: W0913 00:07:28.691934 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.692258 kubelet[3333]: E0913 00:07:28.691956 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:28.711771 kubelet[3333]: E0913 00:07:28.711691 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:28.712000 kubelet[3333]: W0913 00:07:28.711937 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:28.712000 kubelet[3333]: E0913 00:07:28.711962 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
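The kubelet FlexVolume errors above repeat in identical three-line bursts for as long as the driver is missing, and they all describe the same probe sequence: on each rescan of the plugin directory, the kubelet execs every driver binary it finds (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the single argument init and unmarshals the driver's stdout as JSON. Because the uds binary is not installed yet, the exec produces no output and the unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of the handshake a FlexVolume driver is expected to implement; it illustrates the call convention only and is not Calico's actual uds driver.

```go
// Minimal sketch of a FlexVolume driver's "init" handshake, for illustration
// only. The kubelet execs the driver binary with "init" and parses whatever
// it prints on stdout as JSON; an empty stdout is exactly what produces the
// "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's FlexVolume probe expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status: "Success",
			// attach=false: volumes are mounted directly, no attach/detach step.
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

With the binary present and printing this status object, the kubelet's probe succeeds and the driver is registered instead of being skipped.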
Sep 13 00:07:28.759851 containerd[2108]: time="2025-09-13T00:07:28.759806756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cgjf4,Uid:3a8683ea-a5d8-40c7-8c4a-74b49091100d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\"" Sep 13 00:07:30.412255 kubelet[3333]: E0913 00:07:30.412211 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:30.567249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849104935.mount: Deactivated successfully. Sep 13 00:07:31.558133 containerd[2108]: time="2025-09-13T00:07:31.558021247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.560642 containerd[2108]: time="2025-09-13T00:07:31.560205822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:07:31.564750 containerd[2108]: time="2025-09-13T00:07:31.564154802Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.588932 containerd[2108]: time="2025-09-13T00:07:31.584001900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.593184 containerd[2108]: time="2025-09-13T00:07:31.593130131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.948924191s" Sep 13 00:07:31.593184 containerd[2108]: time="2025-09-13T00:07:31.593185098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:07:31.600539 containerd[2108]: time="2025-09-13T00:07:31.598858025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:07:31.625077 containerd[2108]: time="2025-09-13T00:07:31.625033727Z" level=info msg="CreateContainer within sandbox \"e215f8514333b34c5664fa358bae1ee1cf6b91824db2ab2a0a4329d70eb75993\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:07:31.654731 containerd[2108]: time="2025-09-13T00:07:31.652900062Z" level=info msg="CreateContainer within sandbox \"e215f8514333b34c5664fa358bae1ee1cf6b91824db2ab2a0a4329d70eb75993\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5770a7852219c022bab7a60bc4e25fff196a7c10f7e34f5a2a610383a7d77952\"" Sep 13 00:07:31.654731 containerd[2108]: time="2025-09-13T00:07:31.653692532Z" level=info msg="StartContainer for \"5770a7852219c022bab7a60bc4e25fff196a7c10f7e34f5a2a610383a7d77952\"" Sep 13 00:07:31.765978 containerd[2108]: time="2025-09-13T00:07:31.765856798Z" level=info
msg="StartContainer for \"5770a7852219c022bab7a60bc4e25fff196a7c10f7e34f5a2a610383a7d77952\" returns successfully" Sep 13 00:07:32.411882 kubelet[3333]: E0913 00:07:32.411819 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:32.676239 kubelet[3333]: E0913 00:07:32.676114 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:32.676683 kubelet[3333]: W0913 00:07:32.676404 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:32.676683 kubelet[3333]: E0913 00:07:32.676443 3333 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:07:32.850745 containerd[2108]: time="2025-09-13T00:07:32.850286487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:32.852765 containerd[2108]: time="2025-09-13T00:07:32.852692791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:07:32.855213 containerd[2108]: time="2025-09-13T00:07:32.855151213Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:32.858404 containerd[2108]: time="2025-09-13T00:07:32.858340957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:32.859107 containerd[2108]: time="2025-09-13T00:07:32.858975498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.260058986s" Sep 13 00:07:32.859107 containerd[2108]: time="2025-09-13T00:07:32.859010215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:07:32.862770 containerd[2108]: time="2025-09-13T00:07:32.862636861Z" level=info msg="CreateContainer within sandbox \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:07:32.900179 containerd[2108]: time="2025-09-13T00:07:32.900125451Z" level=info msg="CreateContainer within sandbox \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd\"" Sep 13 00:07:32.901102 containerd[2108]: time="2025-09-13T00:07:32.900991584Z" level=info msg="StartContainer for \"f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd\"" Sep 13 00:07:33.029910 containerd[2108]: time="2025-09-13T00:07:33.029780093Z" level=info msg="StartContainer for \"f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd\" returns successfully" Sep 13 00:07:33.074141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd-rootfs.mount: Deactivated successfully.
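The flexvol-driver container started above runs Calico's pod2daemon-flexvol image, whose job is to install the uds FlexVolume driver into the host plugin directory the kubelet has been probing; once it has run, the unmarshal errors above cease. Below is a rough sketch of that install step, under the assumption that it amounts to copying the driver binary into the nodeagent~uds directory; the destination path is taken from the log, while the source path and the temp-file-plus-rename detail are assumptions for illustration, not Calico's actual code.

```go
// Illustrative sketch of what a flexvol-driver init container conceptually
// does: copy the driver binary into the host's FlexVolume plugin directory
// so the kubelet's next probe finds it. Paths other than dstDir are
// hypothetical.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/uds" // hypothetical source inside the container image
	dstDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"

	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	// Write to a temp name first, then rename: the kubelet may probe the
	// directory at any moment, and rename is atomic on the same filesystem.
	tmp := filepath.Join(dstDir, ".uds.tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		log.Fatal(err)
	}
	if err := out.Close(); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, filepath.Join(dstDir, "uds")); err != nil {
		log.Fatal(err)
	}
}
```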
Sep 13 00:07:33.181842 containerd[2108]: time="2025-09-13T00:07:33.172401348Z" level=info msg="shim disconnected" id=f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd namespace=k8s.io Sep 13 00:07:33.181842 containerd[2108]: time="2025-09-13T00:07:33.181630057Z" level=warning msg="cleaning up after shim disconnected" id=f589223b91453dba7309c26217c3136e666c7f4166ff6713f5b2622c15b90cfd namespace=k8s.io Sep 13 00:07:33.181842 containerd[2108]: time="2025-09-13T00:07:33.181648430Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:33.590085 kubelet[3333]: I0913 00:07:33.589525 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:33.592785 containerd[2108]: time="2025-09-13T00:07:33.592748223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:07:33.617735 kubelet[3333]: I0913 00:07:33.617642 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78b67fb446-zdht9" podStartSLOduration=3.663510302 podStartE2EDuration="6.617616815s" podCreationTimestamp="2025-09-13 00:07:27 +0000 UTC" firstStartedPulling="2025-09-13 00:07:28.641888632 +0000 UTC m=+19.402904509" lastFinishedPulling="2025-09-13 00:07:31.595995151 +0000 UTC m=+22.357011022" observedRunningTime="2025-09-13 00:07:32.59916474 +0000 UTC m=+23.360180633" watchObservedRunningTime="2025-09-13 00:07:33.617616815 +0000 UTC m=+24.378632705" Sep 13 00:07:34.412132 kubelet[3333]: E0913 00:07:34.411966 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:36.413242 kubelet[3333]: E0913 00:07:36.411853 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:36.690819 containerd[2108]: time="2025-09-13T00:07:36.690690808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:36.695474 containerd[2108]: time="2025-09-13T00:07:36.695410470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:07:36.702314 containerd[2108]: time="2025-09-13T00:07:36.701061986Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:36.704239 containerd[2108]: time="2025-09-13T00:07:36.704187986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:36.705489 containerd[2108]: time="2025-09-13T00:07:36.704731139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.111941727s" Sep 13 00:07:36.705489 containerd[2108]: time="2025-09-13T00:07:36.704761804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:07:36.711863 containerd[2108]: time="2025-09-13T00:07:36.710692641Z" level=info msg="CreateContainer within sandbox \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:07:36.735484 containerd[2108]: time="2025-09-13T00:07:36.735436289Z" level=info msg="CreateContainer within sandbox \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7\"" Sep 13 00:07:36.736674 containerd[2108]: time="2025-09-13T00:07:36.736023728Z" level=info msg="StartContainer for \"98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7\"" Sep 13 00:07:36.813611 containerd[2108]: time="2025-09-13T00:07:36.813563699Z" level=info msg="StartContainer for \"98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7\" returns successfully" Sep 13 00:07:37.814784 kubelet[3333]: I0913 00:07:37.814438 3333 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:07:37.815486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7-rootfs.mount: Deactivated successfully. Sep 13 00:07:37.819219 containerd[2108]: time="2025-09-13T00:07:37.818541732Z" level=info msg="shim disconnected" id=98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7 namespace=k8s.io Sep 13 00:07:37.819219 containerd[2108]: time="2025-09-13T00:07:37.818612403Z" level=warning msg="cleaning up after shim disconnected" id=98bb1a409c2a29834b21de395a987978e6f478a9dbe2374243919f12b1025ff7 namespace=k8s.io Sep 13 00:07:37.819219 containerd[2108]: time="2025-09-13T00:07:37.818624384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:37.953670 kubelet[3333]: I0913 00:07:37.953261 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsq2j\" (UniqueName: \"kubernetes.io/projected/f6b1d66d-bda4-482c-8530-6567389b0a59-kube-api-access-jsq2j\") pod \"coredns-7c65d6cfc9-f9s78\" (UID: \"f6b1d66d-bda4-482c-8530-6567389b0a59\") " pod="kube-system/coredns-7c65d6cfc9-f9s78" Sep 13 00:07:37.953670 kubelet[3333]: I0913 00:07:37.953304 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdt7d\" (UniqueName: \"kubernetes.io/projected/270d8e62-8736-4a5e-8bc3-f2ede76f3e76-kube-api-access-sdt7d\") pod \"goldmane-7988f88666-zhk7w\" (UID: \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\") " pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:37.953670 kubelet[3333]: I0913 00:07:37.953325 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/270d8e62-8736-4a5e-8bc3-f2ede76f3e76-config\") pod \"goldmane-7988f88666-zhk7w\" (UID: \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\") " pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:37.953670 kubelet[3333]: I0913 00:07:37.953345 3333 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4473fa4-a590-42c7-aa32-c3ce2e18df44-calico-apiserver-certs\") pod \"calico-apiserver-8986d45d5-vzq9f\" (UID: \"d4473fa4-a590-42c7-aa32-c3ce2e18df44\") " pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" Sep 13 00:07:37.953670 kubelet[3333]: I0913 00:07:37.953363 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gpp\" (UniqueName: \"kubernetes.io/projected/58082458-d541-4695-8472-c49eaa5420d4-kube-api-access-p2gpp\") pod \"calico-kube-controllers-6b685ff94f-rk6kg\" (UID: \"58082458-d541-4695-8472-c49eaa5420d4\") " pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" Sep 13 00:07:37.954078 kubelet[3333]: I0913 00:07:37.953388 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4w8\" (UniqueName: \"kubernetes.io/projected/2ece05fb-73e6-4e68-ab11-712551293c2d-kube-api-access-zz4w8\") pod \"coredns-7c65d6cfc9-zkt4z\" (UID: \"2ece05fb-73e6-4e68-ab11-712551293c2d\") " pod="kube-system/coredns-7c65d6cfc9-zkt4z" Sep 13 00:07:37.954078 kubelet[3333]: I0913 00:07:37.953407 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6b1d66d-bda4-482c-8530-6567389b0a59-config-volume\") pod \"coredns-7c65d6cfc9-f9s78\" (UID: \"f6b1d66d-bda4-482c-8530-6567389b0a59\") " pod="kube-system/coredns-7c65d6cfc9-f9s78" Sep 13 00:07:37.954078 kubelet[3333]: I0913 00:07:37.953423 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/270d8e62-8736-4a5e-8bc3-f2ede76f3e76-goldmane-ca-bundle\") pod \"goldmane-7988f88666-zhk7w\" (UID: \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\") " pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:37.954078 kubelet[3333]: I0913 00:07:37.953438 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjmz\" (UniqueName: \"kubernetes.io/projected/d4473fa4-a590-42c7-aa32-c3ce2e18df44-kube-api-access-htjmz\") pod \"calico-apiserver-8986d45d5-vzq9f\" (UID: \"d4473fa4-a590-42c7-aa32-c3ce2e18df44\") " pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" Sep 13 00:07:37.954078 kubelet[3333]: I0913 00:07:37.953459 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9zbd\" (UniqueName: \"kubernetes.io/projected/17ad9243-21d2-4cc5-bd2d-e8206b952e21-kube-api-access-k9zbd\") pod \"whisker-94676679b-f8tw9\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " pod="calico-system/whisker-94676679b-f8tw9" Sep 13 00:07:37.954772 kubelet[3333]: I0913 00:07:37.953474 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26c5671e-e9f7-4e86-8b88-1ceb37855ab1-calico-apiserver-certs\") pod \"calico-apiserver-8986d45d5-6tw7s\" (UID: \"26c5671e-e9f7-4e86-8b88-1ceb37855ab1\") " pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" Sep 13 00:07:37.954772 kubelet[3333]: I0913 00:07:37.953489 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2ece05fb-73e6-4e68-ab11-712551293c2d-config-volume\") pod \"coredns-7c65d6cfc9-zkt4z\" (UID: \"2ece05fb-73e6-4e68-ab11-712551293c2d\") " pod="kube-system/coredns-7c65d6cfc9-zkt4z" Sep 13 00:07:37.954772 kubelet[3333]: I0913 00:07:37.953507 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/270d8e62-8736-4a5e-8bc3-f2ede76f3e76-goldmane-key-pair\") pod \"goldmane-7988f88666-zhk7w\" (UID: \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\") " pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:37.954772 kubelet[3333]: I0913 00:07:37.953521 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-backend-key-pair\") pod \"whisker-94676679b-f8tw9\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " pod="calico-system/whisker-94676679b-f8tw9" Sep 13 00:07:37.954772 kubelet[3333]: I0913 00:07:37.953537 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-ca-bundle\") pod \"whisker-94676679b-f8tw9\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " pod="calico-system/whisker-94676679b-f8tw9" Sep 13 00:07:37.954919 kubelet[3333]: I0913 00:07:37.953552 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn54g\" (UniqueName: \"kubernetes.io/projected/26c5671e-e9f7-4e86-8b88-1ceb37855ab1-kube-api-access-mn54g\") pod \"calico-apiserver-8986d45d5-6tw7s\" (UID: \"26c5671e-e9f7-4e86-8b88-1ceb37855ab1\") " pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" Sep 13 00:07:37.954919 kubelet[3333]: I0913 00:07:37.953569 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58082458-d541-4695-8472-c49eaa5420d4-tigera-ca-bundle\") pod \"calico-kube-controllers-6b685ff94f-rk6kg\" (UID: \"58082458-d541-4695-8472-c49eaa5420d4\") " pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" Sep 13 00:07:38.171538 containerd[2108]: time="2025-09-13T00:07:38.171363087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zkt4z,Uid:2ece05fb-73e6-4e68-ab11-712551293c2d,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:38.174108 containerd[2108]: time="2025-09-13T00:07:38.174055237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f9s78,Uid:f6b1d66d-bda4-482c-8530-6567389b0a59,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:38.208384 containerd[2108]: time="2025-09-13T00:07:38.207150008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-6tw7s,Uid:26c5671e-e9f7-4e86-8b88-1ceb37855ab1,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:38.209355 containerd[2108]: time="2025-09-13T00:07:38.209176878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b685ff94f-rk6kg,Uid:58082458-d541-4695-8472-c49eaa5420d4,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:38.210983 containerd[2108]: time="2025-09-13T00:07:38.210944003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhk7w,Uid:270d8e62-8736-4a5e-8bc3-f2ede76f3e76,Namespace:calico-system,Attempt:0,}" Sep 13 
00:07:38.212720 containerd[2108]: time="2025-09-13T00:07:38.212684807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-vzq9f,Uid:d4473fa4-a590-42c7-aa32-c3ce2e18df44,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:38.217049 containerd[2108]: time="2025-09-13T00:07:38.217013365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94676679b-f8tw9,Uid:17ad9243-21d2-4cc5-bd2d-e8206b952e21,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:38.414834 containerd[2108]: time="2025-09-13T00:07:38.414603664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6ztb2,Uid:3735d737-0352-445e-b2c0-da8688517912,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:38.610119 containerd[2108]: time="2025-09-13T00:07:38.609659038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:07:39.068559 containerd[2108]: time="2025-09-13T00:07:39.068494512Z" level=error msg="Failed to destroy network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.075901 containerd[2108]: time="2025-09-13T00:07:39.069165028Z" level=error msg="Failed to destroy network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.081924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1-shm.mount: Deactivated successfully. Sep 13 00:07:39.090470 containerd[2108]: time="2025-09-13T00:07:39.083111604Z" level=error msg="encountered an error cleaning up failed sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.094609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e-shm.mount: Deactivated successfully. 
Sep 13 00:07:39.099696 containerd[2108]: time="2025-09-13T00:07:39.099643365Z" level=error msg="encountered an error cleaning up failed sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.113301 containerd[2108]: time="2025-09-13T00:07:39.113233235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b685ff94f-rk6kg,Uid:58082458-d541-4695-8472-c49eaa5420d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.123811 containerd[2108]: time="2025-09-13T00:07:39.069253174Z" level=error msg="Failed to destroy network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.126753 containerd[2108]: time="2025-09-13T00:07:39.124250042Z" level=error msg="encountered an error cleaning up failed sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.126753 containerd[2108]: time="2025-09-13T00:07:39.124342762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zkt4z,Uid:2ece05fb-73e6-4e68-ab11-712551293c2d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.132354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7-shm.mount: Deactivated successfully. 
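Editorial note: the same RPC failure then fans out through the kubelet's layers, which is why one CNI fault produces four log lines per pod (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go), each re-quoting the wrapped error with another level of escaping. An illustrative Go sketch of that fan-out; function names and the "sandbox-id" placeholder are stand-ins, not kubelet code:

    package main

    import (
        "errors"
        "fmt"
    )

    // Stand-in for the CNI failure carried back over CRI; the real message is
    // produced by containerd and the calico plugin.
    var errCNI = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

    // runPodSandbox stands in for the CRI RunPodSandbox RPC.
    func runPodSandbox(sandboxID string) error {
        return fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox %q: %w", sandboxID, errCNI)
    }

    func createPodSandbox(pod string) error {
        err := runPodSandbox("sandbox-id")
        if err == nil {
            return nil
        }
        // Each layer re-logs the same wrapped error, which is why one CNI
        // fault shows up several times per pod in the journal above.
        fmt.Printf("RunPodSandbox from runtime service failed err=%v\n", err)    // log.go
        fmt.Printf("Failed to create sandbox for pod err=%v pod=%s\n", err, pod) // kuberuntime_sandbox.go
        fmt.Printf("CreatePodSandbox for pod failed err=%v pod=%s\n", err, pod)  // kuberuntime_manager.go
        return fmt.Errorf("failed to %q for %q with CreatePodSandboxError: %w", "CreatePodSandbox", pod, err)
    }

    func main() {
        if err := createPodSandbox("kube-system/coredns-7c65d6cfc9-zkt4z"); err != nil {
            // pod_workers.go: the sync fails and is retried later with backoff,
            // which is what eventually succeeds once calico/node is running.
            fmt.Printf("Error syncing pod, skipping err=%v\n", err)
        }
    }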
Sep 13 00:07:39.139837 containerd[2108]: time="2025-09-13T00:07:39.139783086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhk7w,Uid:270d8e62-8736-4a5e-8bc3-f2ede76f3e76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.141406 containerd[2108]: time="2025-09-13T00:07:39.141367067Z" level=error msg="Failed to destroy network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.143123 kubelet[3333]: E0913 00:07:39.142047 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.143123 kubelet[3333]: E0913 00:07:39.142122 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" Sep 13 00:07:39.143123 kubelet[3333]: E0913 00:07:39.142151 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" Sep 13 00:07:39.143659 kubelet[3333]: E0913 00:07:39.142224 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b685ff94f-rk6kg_calico-system(58082458-d541-4695-8472-c49eaa5420d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b685ff94f-rk6kg_calico-system(58082458-d541-4695-8472-c49eaa5420d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" podUID="58082458-d541-4695-8472-c49eaa5420d4" Sep 13 00:07:39.145335 containerd[2108]: time="2025-09-13T00:07:39.144468119Z" level=error msg="Failed to destroy network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.145406 kubelet[3333]: E0913 00:07:39.145339 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.145453 kubelet[3333]: E0913 00:07:39.145419 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zkt4z" Sep 13 00:07:39.145498 kubelet[3333]: E0913 00:07:39.145452 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zkt4z" Sep 13 00:07:39.145549 kubelet[3333]: E0913 00:07:39.145505 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zkt4z_kube-system(2ece05fb-73e6-4e68-ab11-712551293c2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zkt4z_kube-system(2ece05fb-73e6-4e68-ab11-712551293c2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zkt4z" podUID="2ece05fb-73e6-4e68-ab11-712551293c2d" Sep 13 00:07:39.156355 kubelet[3333]: E0913 00:07:39.145576 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156355 kubelet[3333]: E0913 00:07:39.145602 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:39.156355 kubelet[3333]: E0913 00:07:39.145622 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zhk7w" Sep 13 00:07:39.152968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49-shm.mount: Deactivated successfully. Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.147192756Z" level=error msg="encountered an error cleaning up failed sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.147732694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6ztb2,Uid:3735d737-0352-445e-b2c0-da8688517912,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.148970514Z" level=error msg="Failed to destroy network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.149418547Z" level=error msg="encountered an error cleaning up failed sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.149502079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-vzq9f,Uid:d4473fa4-a590-42c7-aa32-c3ce2e18df44,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.149870545Z" level=error msg="Failed to destroy network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.151069724Z" level=error msg="Failed to destroy network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.153189355Z" level=error msg="encountered an error cleaning up failed sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.153257861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f9s78,Uid:f6b1d66d-bda4-482c-8530-6567389b0a59,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.153447349Z" level=error msg="encountered an error cleaning up failed sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.153493852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-6tw7s,Uid:26c5671e-e9f7-4e86-8b88-1ceb37855ab1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.153497025Z" level=error msg="encountered an error cleaning up failed sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.156989 containerd[2108]: time="2025-09-13T00:07:39.154278066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94676679b-f8tw9,Uid:17ad9243-21d2-4cc5-bd2d-e8206b952e21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.158323 kubelet[3333]: E0913 00:07:39.145654 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-zhk7w_calico-system(270d8e62-8736-4a5e-8bc3-f2ede76f3e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-zhk7w_calico-system(270d8e62-8736-4a5e-8bc3-f2ede76f3e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zhk7w" podUID="270d8e62-8736-4a5e-8bc3-f2ede76f3e76" Sep 13 00:07:39.158323 kubelet[3333]: E0913 00:07:39.154079 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.158323 kubelet[3333]: E0913 00:07:39.154167 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:39.153990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7-shm.mount: Deactivated successfully. Sep 13 00:07:39.159719 kubelet[3333]: E0913 00:07:39.154209 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6ztb2" Sep 13 00:07:39.159719 kubelet[3333]: E0913 00:07:39.154353 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6ztb2_calico-system(3735d737-0352-445e-b2c0-da8688517912)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6ztb2_calico-system(3735d737-0352-445e-b2c0-da8688517912)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:39.159719 kubelet[3333]: E0913 00:07:39.154534 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.159719 kubelet[3333]: E0913 00:07:39.154744 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.159878 kubelet[3333]: 
E0913 00:07:39.154974 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f9s78" Sep 13 00:07:39.159878 kubelet[3333]: E0913 00:07:39.155003 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f9s78" Sep 13 00:07:39.159878 kubelet[3333]: E0913 00:07:39.154782 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" Sep 13 00:07:39.159878 kubelet[3333]: E0913 00:07:39.155143 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" Sep 13 00:07:39.159984 kubelet[3333]: E0913 00:07:39.155303 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f9s78_kube-system(f6b1d66d-bda4-482c-8530-6567389b0a59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f9s78_kube-system(f6b1d66d-bda4-482c-8530-6567389b0a59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f9s78" podUID="f6b1d66d-bda4-482c-8530-6567389b0a59" Sep 13 00:07:39.159984 kubelet[3333]: E0913 00:07:39.155181 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8986d45d5-vzq9f_calico-apiserver(d4473fa4-a590-42c7-aa32-c3ce2e18df44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8986d45d5-vzq9f_calico-apiserver(d4473fa4-a590-42c7-aa32-c3ce2e18df44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" 
podUID="d4473fa4-a590-42c7-aa32-c3ce2e18df44" Sep 13 00:07:39.159984 kubelet[3333]: E0913 00:07:39.155613 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.160204 kubelet[3333]: E0913 00:07:39.155646 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" Sep 13 00:07:39.160204 kubelet[3333]: E0913 00:07:39.155974 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" Sep 13 00:07:39.160204 kubelet[3333]: E0913 00:07:39.155864 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.160204 kubelet[3333]: E0913 00:07:39.156161 3333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-94676679b-f8tw9" Sep 13 00:07:39.160313 kubelet[3333]: E0913 00:07:39.156185 3333 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-94676679b-f8tw9" Sep 13 00:07:39.160313 kubelet[3333]: E0913 00:07:39.156221 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-94676679b-f8tw9_calico-system(17ad9243-21d2-4cc5-bd2d-e8206b952e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-94676679b-f8tw9_calico-system(17ad9243-21d2-4cc5-bd2d-e8206b952e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-94676679b-f8tw9" podUID="17ad9243-21d2-4cc5-bd2d-e8206b952e21" Sep 13 00:07:39.160313 kubelet[3333]: E0913 00:07:39.156020 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8986d45d5-6tw7s_calico-apiserver(26c5671e-e9f7-4e86-8b88-1ceb37855ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8986d45d5-6tw7s_calico-apiserver(26c5671e-e9f7-4e86-8b88-1ceb37855ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" podUID="26c5671e-e9f7-4e86-8b88-1ceb37855ab1" Sep 13 00:07:39.610456 kubelet[3333]: I0913 00:07:39.610420 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:39.613796 kubelet[3333]: I0913 00:07:39.613766 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:39.646308 kubelet[3333]: I0913 00:07:39.645308 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:39.664731 containerd[2108]: time="2025-09-13T00:07:39.663424355Z" level=info msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" Sep 13 00:07:39.666723 containerd[2108]: time="2025-09-13T00:07:39.665588921Z" level=info msg="Ensure that sandbox 968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7 in task-service has been cleanup successfully" Sep 13 00:07:39.666723 containerd[2108]: time="2025-09-13T00:07:39.666343374Z" level=info msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" Sep 13 00:07:39.666723 containerd[2108]: time="2025-09-13T00:07:39.666509226Z" level=info msg="Ensure that sandbox 624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f in task-service has been cleanup successfully" Sep 13 00:07:39.670175 containerd[2108]: time="2025-09-13T00:07:39.670132049Z" level=info msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" Sep 13 00:07:39.670362 containerd[2108]: time="2025-09-13T00:07:39.670346094Z" level=info msg="Ensure that sandbox 5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906 in task-service has been cleanup successfully" Sep 13 00:07:39.671597 kubelet[3333]: I0913 00:07:39.671542 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:39.674035 containerd[2108]: time="2025-09-13T00:07:39.673889372Z" level=info msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" Sep 13 00:07:39.674758 containerd[2108]: time="2025-09-13T00:07:39.674727251Z" level=info msg="Ensure that sandbox abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49 in task-service has been cleanup successfully" Sep 13 00:07:39.678002 kubelet[3333]: I0913 
00:07:39.677829 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:39.681120 containerd[2108]: time="2025-09-13T00:07:39.680783987Z" level=info msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" Sep 13 00:07:39.685418 containerd[2108]: time="2025-09-13T00:07:39.685375905Z" level=info msg="Ensure that sandbox 05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7 in task-service has been cleanup successfully" Sep 13 00:07:39.688545 kubelet[3333]: I0913 00:07:39.688513 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:39.694185 containerd[2108]: time="2025-09-13T00:07:39.694142470Z" level=info msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" Sep 13 00:07:39.699421 containerd[2108]: time="2025-09-13T00:07:39.699377736Z" level=info msg="Ensure that sandbox 4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e in task-service has been cleanup successfully" Sep 13 00:07:39.773472 kubelet[3333]: I0913 00:07:39.773392 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:39.782903 containerd[2108]: time="2025-09-13T00:07:39.782859728Z" level=info msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" Sep 13 00:07:39.783152 containerd[2108]: time="2025-09-13T00:07:39.783123125Z" level=info msg="Ensure that sandbox a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1 in task-service has been cleanup successfully" Sep 13 00:07:39.785604 containerd[2108]: time="2025-09-13T00:07:39.785555571Z" level=error msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" failed" error="failed to destroy network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.786661 kubelet[3333]: E0913 00:07:39.786614 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:39.788957 kubelet[3333]: E0913 00:07:39.786687 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906"} Sep 13 00:07:39.789095 kubelet[3333]: E0913 00:07:39.788998 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26c5671e-e9f7-4e86-8b88-1ceb37855ab1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.789095 kubelet[3333]: E0913 00:07:39.789037 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26c5671e-e9f7-4e86-8b88-1ceb37855ab1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" podUID="26c5671e-e9f7-4e86-8b88-1ceb37855ab1" Sep 13 00:07:39.813253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6-shm.mount: Deactivated successfully. Sep 13 00:07:39.813924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906-shm.mount: Deactivated successfully. Sep 13 00:07:39.814074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f-shm.mount: Deactivated successfully. Sep 13 00:07:39.822298 kubelet[3333]: I0913 00:07:39.821815 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:39.825015 containerd[2108]: time="2025-09-13T00:07:39.824946057Z" level=info msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" Sep 13 00:07:39.831014 containerd[2108]: time="2025-09-13T00:07:39.830938830Z" level=info msg="Ensure that sandbox e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6 in task-service has been cleanup successfully" Sep 13 00:07:39.920075 containerd[2108]: time="2025-09-13T00:07:39.920022040Z" level=error msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" failed" error="failed to destroy network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.920437 containerd[2108]: time="2025-09-13T00:07:39.920405583Z" level=error msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" failed" error="failed to destroy network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.920774 kubelet[3333]: E0913 00:07:39.920738 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:39.921402 kubelet[3333]: E0913 00:07:39.920947 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7"} Sep 13 00:07:39.921402 kubelet[3333]: E0913 00:07:39.921000 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ece05fb-73e6-4e68-ab11-712551293c2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.921402 kubelet[3333]: E0913 00:07:39.921033 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ece05fb-73e6-4e68-ab11-712551293c2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zkt4z" podUID="2ece05fb-73e6-4e68-ab11-712551293c2d" Sep 13 00:07:39.921402 kubelet[3333]: E0913 00:07:39.920738 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:39.921402 kubelet[3333]: E0913 00:07:39.921072 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7"} Sep 13 00:07:39.922078 kubelet[3333]: E0913 00:07:39.921229 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4473fa4-a590-42c7-aa32-c3ce2e18df44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.922078 kubelet[3333]: E0913 00:07:39.921336 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4473fa4-a590-42c7-aa32-c3ce2e18df44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" podUID="d4473fa4-a590-42c7-aa32-c3ce2e18df44" Sep 13 00:07:39.933934 containerd[2108]: time="2025-09-13T00:07:39.933883222Z" level=error msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" failed" error="failed to destroy network for sandbox 
\"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.937739 containerd[2108]: time="2025-09-13T00:07:39.934615435Z" level=error msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" failed" error="failed to destroy network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.937829 kubelet[3333]: E0913 00:07:39.934184 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:39.937829 kubelet[3333]: E0913 00:07:39.934237 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49"} Sep 13 00:07:39.937829 kubelet[3333]: E0913 00:07:39.934287 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3735d737-0352-445e-b2c0-da8688517912\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.937829 kubelet[3333]: E0913 00:07:39.934316 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3735d737-0352-445e-b2c0-da8688517912\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6ztb2" podUID="3735d737-0352-445e-b2c0-da8688517912" Sep 13 00:07:39.938117 kubelet[3333]: E0913 00:07:39.934828 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:39.938117 kubelet[3333]: E0913 00:07:39.934907 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f"} Sep 13 00:07:39.938117 kubelet[3333]: E0913 00:07:39.934945 3333 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6b1d66d-bda4-482c-8530-6567389b0a59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.938117 kubelet[3333]: E0913 00:07:39.934998 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6b1d66d-bda4-482c-8530-6567389b0a59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f9s78" podUID="f6b1d66d-bda4-482c-8530-6567389b0a59" Sep 13 00:07:39.971406 containerd[2108]: time="2025-09-13T00:07:39.971354706Z" level=error msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" failed" error="failed to destroy network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.971761 kubelet[3333]: E0913 00:07:39.971717 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:39.971914 kubelet[3333]: E0913 00:07:39.971779 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e"} Sep 13 00:07:39.971914 kubelet[3333]: E0913 00:07:39.971821 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.971914 kubelet[3333]: E0913 00:07:39.971853 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"270d8e62-8736-4a5e-8bc3-f2ede76f3e76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zhk7w" 
podUID="270d8e62-8736-4a5e-8bc3-f2ede76f3e76" Sep 13 00:07:39.983832 containerd[2108]: time="2025-09-13T00:07:39.983678209Z" level=error msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" failed" error="failed to destroy network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:39.984101 kubelet[3333]: E0913 00:07:39.984031 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:39.984206 kubelet[3333]: E0913 00:07:39.984109 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1"} Sep 13 00:07:39.984206 kubelet[3333]: E0913 00:07:39.984155 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58082458-d541-4695-8472-c49eaa5420d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:39.984572 kubelet[3333]: E0913 00:07:39.984202 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58082458-d541-4695-8472-c49eaa5420d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" podUID="58082458-d541-4695-8472-c49eaa5420d4" Sep 13 00:07:40.008488 containerd[2108]: time="2025-09-13T00:07:40.008398568Z" level=error msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" failed" error="failed to destroy network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:40.008909 kubelet[3333]: E0913 00:07:40.008856 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:40.009391 
kubelet[3333]: E0913 00:07:40.009269 3333 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6"} Sep 13 00:07:40.009391 kubelet[3333]: E0913 00:07:40.009348 3333 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:40.009573 kubelet[3333]: E0913 00:07:40.009538 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-94676679b-f8tw9" podUID="17ad9243-21d2-4cc5-bd2d-e8206b952e21" Sep 13 00:07:43.253203 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:07:43.250978 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:07:43.251060 systemd-resolved[1989]: Flushed all caches. Sep 13 00:07:44.687902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663327305.mount: Deactivated successfully. Sep 13 00:07:44.772855 containerd[2108]: time="2025-09-13T00:07:44.772790530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:07:44.823462 containerd[2108]: time="2025-09-13T00:07:44.821662274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:44.871172 containerd[2108]: time="2025-09-13T00:07:44.864129155Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:44.871172 containerd[2108]: time="2025-09-13T00:07:44.865253636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:44.871172 containerd[2108]: time="2025-09-13T00:07:44.868944379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.25652752s" Sep 13 00:07:44.871172 containerd[2108]: time="2025-09-13T00:07:44.868995705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:07:44.970791 containerd[2108]: time="2025-09-13T00:07:44.970661641Z" level=info msg="CreateContainer within sandbox 
\"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:07:45.065044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100764048.mount: Deactivated successfully. Sep 13 00:07:45.090381 containerd[2108]: time="2025-09-13T00:07:45.089005447Z" level=info msg="CreateContainer within sandbox \"7d0d409e4ccff73b026fe4b766838027927c10575b503e436f60a3a2d6aea453\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"adcbf58e85eae02a58ff62fc323350e1a086f6770563cc66ef9e195bf585120d\"" Sep 13 00:07:45.092517 containerd[2108]: time="2025-09-13T00:07:45.092461137Z" level=info msg="StartContainer for \"adcbf58e85eae02a58ff62fc323350e1a086f6770563cc66ef9e195bf585120d\"" Sep 13 00:07:45.272432 containerd[2108]: time="2025-09-13T00:07:45.272118718Z" level=info msg="StartContainer for \"adcbf58e85eae02a58ff62fc323350e1a086f6770563cc66ef9e195bf585120d\" returns successfully" Sep 13 00:07:45.398743 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:07:45.398855 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:07:45.908509 kubelet[3333]: I0913 00:07:45.906942 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:46.066477 kubelet[3333]: I0913 00:07:46.043824 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cgjf4" podStartSLOduration=1.913949856 podStartE2EDuration="18.025686227s" podCreationTimestamp="2025-09-13 00:07:28 +0000 UTC" firstStartedPulling="2025-09-13 00:07:28.761828112 +0000 UTC m=+19.522843988" lastFinishedPulling="2025-09-13 00:07:44.873564484 +0000 UTC m=+35.634580359" observedRunningTime="2025-09-13 00:07:45.908546974 +0000 UTC m=+36.669562863" watchObservedRunningTime="2025-09-13 00:07:46.025686227 +0000 UTC m=+36.786702166" Sep 13 00:07:46.080620 containerd[2108]: time="2025-09-13T00:07:46.080562273Z" level=info msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" Sep 13 00:07:46.887596 kubelet[3333]: I0913 00:07:46.885796 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:47.287767 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:07:47.289861 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:07:47.289873 systemd-resolved[1989]: Flushed all caches. Sep 13 00:07:47.449759 kernel: bpftool[4936]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.285 [INFO][4803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.286 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" iface="eth0" netns="/var/run/netns/cni-6301d408-ab86-769f-fb23-2bd04965c5ac" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.287 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" iface="eth0" netns="/var/run/netns/cni-6301d408-ab86-769f-fb23-2bd04965c5ac" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.288 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" iface="eth0" netns="/var/run/netns/cni-6301d408-ab86-769f-fb23-2bd04965c5ac" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.288 [INFO][4803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:46.288 [INFO][4803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.437 [INFO][4811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.442 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.443 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.462 [WARNING][4811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.462 [INFO][4811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.464 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:47.468609 containerd[2108]: 2025-09-13 00:07:47.466 [INFO][4803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:07:47.477283 systemd[1]: run-netns-cni\x2d6301d408\x2dab86\x2d769f\x2dfb23\x2d2bd04965c5ac.mount: Deactivated successfully. 
Sep 13 00:07:47.478723 containerd[2108]: time="2025-09-13T00:07:47.478507535Z" level=info msg="TearDown network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" successfully" Sep 13 00:07:47.478723 containerd[2108]: time="2025-09-13T00:07:47.478566658Z" level=info msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" returns successfully" Sep 13 00:07:47.571798 kubelet[3333]: I0913 00:07:47.569301 3333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-ca-bundle\") pod \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " Sep 13 00:07:47.571798 kubelet[3333]: I0913 00:07:47.569367 3333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9zbd\" (UniqueName: \"kubernetes.io/projected/17ad9243-21d2-4cc5-bd2d-e8206b952e21-kube-api-access-k9zbd\") pod \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " Sep 13 00:07:47.571798 kubelet[3333]: I0913 00:07:47.569400 3333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-backend-key-pair\") pod \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\" (UID: \"17ad9243-21d2-4cc5-bd2d-e8206b952e21\") " Sep 13 00:07:47.596563 systemd[1]: var-lib-kubelet-pods-17ad9243\x2d21d2\x2d4cc5\x2dbd2d\x2de8206b952e21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9zbd.mount: Deactivated successfully. Sep 13 00:07:47.612757 kubelet[3333]: I0913 00:07:47.605030 3333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "17ad9243-21d2-4cc5-bd2d-e8206b952e21" (UID: "17ad9243-21d2-4cc5-bd2d-e8206b952e21"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:07:47.612757 kubelet[3333]: I0913 00:07:47.609824 3333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "17ad9243-21d2-4cc5-bd2d-e8206b952e21" (UID: "17ad9243-21d2-4cc5-bd2d-e8206b952e21"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:07:47.612757 kubelet[3333]: I0913 00:07:47.609900 3333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ad9243-21d2-4cc5-bd2d-e8206b952e21-kube-api-access-k9zbd" (OuterVolumeSpecName: "kube-api-access-k9zbd") pod "17ad9243-21d2-4cc5-bd2d-e8206b952e21" (UID: "17ad9243-21d2-4cc5-bd2d-e8206b952e21"). InnerVolumeSpecName "kube-api-access-k9zbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:07:47.609204 systemd[1]: var-lib-kubelet-pods-17ad9243\x2d21d2\x2d4cc5\x2dbd2d\x2de8206b952e21-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:07:47.670622 kubelet[3333]: I0913 00:07:47.670518 3333 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-backend-key-pair\") on node \"ip-172-31-25-42\" DevicePath \"\"" Sep 13 00:07:47.670622 kubelet[3333]: I0913 00:07:47.670555 3333 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17ad9243-21d2-4cc5-bd2d-e8206b952e21-whisker-ca-bundle\") on node \"ip-172-31-25-42\" DevicePath \"\"" Sep 13 00:07:47.670622 kubelet[3333]: I0913 00:07:47.670570 3333 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9zbd\" (UniqueName: \"kubernetes.io/projected/17ad9243-21d2-4cc5-bd2d-e8206b952e21-kube-api-access-k9zbd\") on node \"ip-172-31-25-42\" DevicePath \"\"" Sep 13 00:07:47.739908 (udev-worker)[4768]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:07:47.745738 systemd-networkd[1657]: vxlan.calico: Link UP Sep 13 00:07:47.746088 systemd-networkd[1657]: vxlan.calico: Gained carrier Sep 13 00:07:47.771071 (udev-worker)[4968]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:07:47.771541 (udev-worker)[4969]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:07:48.179591 kubelet[3333]: I0913 00:07:48.179535 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwfj\" (UniqueName: \"kubernetes.io/projected/117a309b-4915-4d9f-9d53-dc10c8b23ba2-kube-api-access-5lwfj\") pod \"whisker-97749c699-2lxv5\" (UID: \"117a309b-4915-4d9f-9d53-dc10c8b23ba2\") " pod="calico-system/whisker-97749c699-2lxv5" Sep 13 00:07:48.182726 kubelet[3333]: I0913 00:07:48.180749 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/117a309b-4915-4d9f-9d53-dc10c8b23ba2-whisker-ca-bundle\") pod \"whisker-97749c699-2lxv5\" (UID: \"117a309b-4915-4d9f-9d53-dc10c8b23ba2\") " pod="calico-system/whisker-97749c699-2lxv5" Sep 13 00:07:48.182726 kubelet[3333]: I0913 00:07:48.180884 3333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/117a309b-4915-4d9f-9d53-dc10c8b23ba2-whisker-backend-key-pair\") pod \"whisker-97749c699-2lxv5\" (UID: \"117a309b-4915-4d9f-9d53-dc10c8b23ba2\") " pod="calico-system/whisker-97749c699-2lxv5" Sep 13 00:07:48.391148 containerd[2108]: time="2025-09-13T00:07:48.391103900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97749c699-2lxv5,Uid:117a309b-4915-4d9f-9d53-dc10c8b23ba2,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:48.611879 systemd-networkd[1657]: calicc74251078c: Link UP Sep 13 00:07:48.613928 systemd-networkd[1657]: calicc74251078c: Gained carrier Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.494 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0 whisker-97749c699- calico-system 117a309b-4915-4d9f-9d53-dc10c8b23ba2 913 0 2025-09-13 00:07:47 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:97749c699 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-42 
whisker-97749c699-2lxv5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicc74251078c [] [] }} ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.494 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.525 [INFO][5022] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" HandleID="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Workload="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.526 [INFO][5022] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" HandleID="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Workload="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5100), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-42", "pod":"whisker-97749c699-2lxv5", "timestamp":"2025-09-13 00:07:48.525841364 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.526 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.526 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.526 [INFO][5022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.542 [INFO][5022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.574 [INFO][5022] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.580 [INFO][5022] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.582 [INFO][5022] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.585 [INFO][5022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.585 [INFO][5022] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.587 [INFO][5022] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.596 [INFO][5022] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.602 [INFO][5022] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.129/26] block=192.168.105.128/26 handle="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.602 [INFO][5022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.129/26] handle="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" host="ip-172-31-25-42" Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.602 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:48.632307 containerd[2108]: 2025-09-13 00:07:48.602 [INFO][5022] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.129/26] IPv6=[] ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" HandleID="k8s-pod-network.20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Workload="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.606 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0", GenerateName:"whisker-97749c699-", Namespace:"calico-system", SelfLink:"", UID:"117a309b-4915-4d9f-9d53-dc10c8b23ba2", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97749c699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"whisker-97749c699-2lxv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc74251078c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.606 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.129/32] ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.606 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc74251078c ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.615 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.615 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" 
WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0", GenerateName:"whisker-97749c699-", Namespace:"calico-system", SelfLink:"", UID:"117a309b-4915-4d9f-9d53-dc10c8b23ba2", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"97749c699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b", Pod:"whisker-97749c699-2lxv5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc74251078c", MAC:"d2:fc:99:2f:dd:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:48.635457 containerd[2108]: 2025-09-13 00:07:48.629 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b" Namespace="calico-system" Pod="whisker-97749c699-2lxv5" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--97749c699--2lxv5-eth0" Sep 13 00:07:48.665918 containerd[2108]: time="2025-09-13T00:07:48.665552274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:48.665918 containerd[2108]: time="2025-09-13T00:07:48.665607049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:48.665918 containerd[2108]: time="2025-09-13T00:07:48.665617346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:48.685769 containerd[2108]: time="2025-09-13T00:07:48.669205505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:48.773730 containerd[2108]: time="2025-09-13T00:07:48.773655894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97749c699-2lxv5,Uid:117a309b-4915-4d9f-9d53-dc10c8b23ba2,Namespace:calico-system,Attempt:0,} returns sandbox id \"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b\"" Sep 13 00:07:48.786362 containerd[2108]: time="2025-09-13T00:07:48.786306779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:07:48.882051 systemd-networkd[1657]: vxlan.calico: Gained IPv6LL Sep 13 00:07:49.330116 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:07:49.330138 systemd-resolved[1989]: Flushed all caches. Sep 13 00:07:49.331735 systemd-journald[1580]: Under memory pressure, flushing caches. 
Sep 13 00:07:49.418772 kubelet[3333]: I0913 00:07:49.417981 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ad9243-21d2-4cc5-bd2d-e8206b952e21" path="/var/lib/kubelet/pods/17ad9243-21d2-4cc5-bd2d-e8206b952e21/volumes" Sep 13 00:07:50.065778 containerd[2108]: time="2025-09-13T00:07:50.065704291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:50.067737 containerd[2108]: time="2025-09-13T00:07:50.067587074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:07:50.070042 containerd[2108]: time="2025-09-13T00:07:50.069984725Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:50.076729 containerd[2108]: time="2025-09-13T00:07:50.076652883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:50.077982 containerd[2108]: time="2025-09-13T00:07:50.077498976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.291161644s" Sep 13 00:07:50.077982 containerd[2108]: time="2025-09-13T00:07:50.077533171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:07:50.095138 containerd[2108]: time="2025-09-13T00:07:50.095100925Z" level=info msg="CreateContainer within sandbox \"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:07:50.115147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783399195.mount: Deactivated successfully. 
Sep 13 00:07:50.120610 containerd[2108]: time="2025-09-13T00:07:50.120550488Z" level=info msg="CreateContainer within sandbox \"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5819ecea016b3eaf84d1ebb1b4ac7b27fdce7027a30bee8a4d75869fc7357142\"" Sep 13 00:07:50.121341 containerd[2108]: time="2025-09-13T00:07:50.121180051Z" level=info msg="StartContainer for \"5819ecea016b3eaf84d1ebb1b4ac7b27fdce7027a30bee8a4d75869fc7357142\"" Sep 13 00:07:50.198725 containerd[2108]: time="2025-09-13T00:07:50.198581289Z" level=info msg="StartContainer for \"5819ecea016b3eaf84d1ebb1b4ac7b27fdce7027a30bee8a4d75869fc7357142\" returns successfully" Sep 13 00:07:50.200411 containerd[2108]: time="2025-09-13T00:07:50.200110208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:07:50.418883 systemd-networkd[1657]: calicc74251078c: Gained IPv6LL Sep 13 00:07:51.419550 containerd[2108]: time="2025-09-13T00:07:51.418206486Z" level=info msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" Sep 13 00:07:51.435385 containerd[2108]: time="2025-09-13T00:07:51.435326058Z" level=info msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.596 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.603 [INFO][5145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" iface="eth0" netns="/var/run/netns/cni-e9615cf1-c3bf-f762-300b-a98a30ddf3fd" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.603 [INFO][5145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" iface="eth0" netns="/var/run/netns/cni-e9615cf1-c3bf-f762-300b-a98a30ddf3fd" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.603 [INFO][5145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" iface="eth0" netns="/var/run/netns/cni-e9615cf1-c3bf-f762-300b-a98a30ddf3fd" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.605 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.607 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.678 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.678 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.678 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.690 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.690 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.692 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:51.708134 containerd[2108]: 2025-09-13 00:07:51.699 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:07:51.715723 containerd[2108]: time="2025-09-13T00:07:51.711036880Z" level=info msg="TearDown network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" successfully" Sep 13 00:07:51.715723 containerd[2108]: time="2025-09-13T00:07:51.711079956Z" level=info msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" returns successfully" Sep 13 00:07:51.715723 containerd[2108]: time="2025-09-13T00:07:51.714033609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b685ff94f-rk6kg,Uid:58082458-d541-4695-8472-c49eaa5420d4,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:51.712531 systemd[1]: run-netns-cni\x2de9615cf1\x2dc3bf\x2df762\x2d300b\x2da98a30ddf3fd.mount: Deactivated successfully. Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.607 [INFO][5144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.608 [INFO][5144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" iface="eth0" netns="/var/run/netns/cni-b9f3d569-fefa-9e06-7f44-6752ae2a0da9" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.609 [INFO][5144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" iface="eth0" netns="/var/run/netns/cni-b9f3d569-fefa-9e06-7f44-6752ae2a0da9" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.609 [INFO][5144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" iface="eth0" netns="/var/run/netns/cni-b9f3d569-fefa-9e06-7f44-6752ae2a0da9" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.609 [INFO][5144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.609 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.717 [INFO][5160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.717 [INFO][5160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.717 [INFO][5160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.738 [WARNING][5160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.738 [INFO][5160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.742 [INFO][5160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:51.766633 containerd[2108]: 2025-09-13 00:07:51.753 [INFO][5144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:07:51.771577 containerd[2108]: time="2025-09-13T00:07:51.770784775Z" level=info msg="TearDown network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" successfully" Sep 13 00:07:51.771577 containerd[2108]: time="2025-09-13T00:07:51.770822005Z" level=info msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" returns successfully" Sep 13 00:07:51.771577 containerd[2108]: time="2025-09-13T00:07:51.771533530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zkt4z,Uid:2ece05fb-73e6-4e68-ab11-712551293c2d,Namespace:kube-system,Attempt:1,}" Sep 13 00:07:51.774157 systemd[1]: run-netns-cni\x2db9f3d569\x2dfefa\x2d9e06\x2d7f44\x2d6752ae2a0da9.mount: Deactivated successfully. Sep 13 00:07:52.089247 systemd-networkd[1657]: calia1aa6e8ea83: Link UP Sep 13 00:07:52.092126 systemd-networkd[1657]: calia1aa6e8ea83: Gained carrier Sep 13 00:07:52.102275 (udev-worker)[5214]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:51.917 [INFO][5171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0 calico-kube-controllers-6b685ff94f- calico-system 58082458-d541-4695-8472-c49eaa5420d4 931 0 2025-09-13 00:07:28 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b685ff94f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-42 calico-kube-controllers-6b685ff94f-rk6kg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia1aa6e8ea83 [] [] }} ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:51.917 [INFO][5171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.012 [INFO][5199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" HandleID="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.012 [INFO][5199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" HandleID="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-42", "pod":"calico-kube-controllers-6b685ff94f-rk6kg", "timestamp":"2025-09-13 00:07:52.012465393 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.012 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.012 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.012 [INFO][5199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.022 [INFO][5199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.035 [INFO][5199] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.042 [INFO][5199] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.045 [INFO][5199] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.049 [INFO][5199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.050 [INFO][5199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.054 [INFO][5199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.063 [INFO][5199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.075 [INFO][5199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.075 [INFO][5199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" host="ip-172-31-25-42" Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.076 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:52.135944 containerd[2108]: 2025-09-13 00:07:52.076 [INFO][5199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" HandleID="k8s-pod-network.471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.137814 containerd[2108]: 2025-09-13 00:07:52.082 [INFO][5171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0", GenerateName:"calico-kube-controllers-6b685ff94f-", Namespace:"calico-system", SelfLink:"", UID:"58082458-d541-4695-8472-c49eaa5420d4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b685ff94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"calico-kube-controllers-6b685ff94f-rk6kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1aa6e8ea83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:52.137814 containerd[2108]: 2025-09-13 00:07:52.082 [INFO][5171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.130/32] ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.137814 containerd[2108]: 2025-09-13 00:07:52.082 [INFO][5171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1aa6e8ea83 ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.137814 containerd[2108]: 2025-09-13 00:07:52.094 [INFO][5171] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.137814 containerd[2108]: 
2025-09-13 00:07:52.095 [INFO][5171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0", GenerateName:"calico-kube-controllers-6b685ff94f-", Namespace:"calico-system", SelfLink:"", UID:"58082458-d541-4695-8472-c49eaa5420d4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b685ff94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c", Pod:"calico-kube-controllers-6b685ff94f-rk6kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1aa6e8ea83", MAC:"2a:99:16:a7:06:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:52.137814 containerd[2108]: 2025-09-13 00:07:52.118 [INFO][5171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c" Namespace="calico-system" Pod="calico-kube-controllers-6b685ff94f-rk6kg" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:07:52.237878 containerd[2108]: time="2025-09-13T00:07:52.237759010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:52.237878 containerd[2108]: time="2025-09-13T00:07:52.237820815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:52.237878 containerd[2108]: time="2025-09-13T00:07:52.237837549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:52.238965 containerd[2108]: time="2025-09-13T00:07:52.237953689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:52.265753 systemd-networkd[1657]: cali3c0898084dc: Link UP Sep 13 00:07:52.267892 systemd-networkd[1657]: cali3c0898084dc: Gained carrier Sep 13 00:07:52.314026 systemd[1]: Started sshd@7-172.31.25.42:22-139.178.89.65:54318.service - OpenSSH per-connection server daemon (139.178.89.65:54318). 
Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.042 [INFO][5188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0 coredns-7c65d6cfc9- kube-system 2ece05fb-73e6-4e68-ab11-712551293c2d 932 0 2025-09-13 00:07:14 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-42 coredns-7c65d6cfc9-zkt4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c0898084dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.043 [INFO][5188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.175 [INFO][5210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" HandleID="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.175 [INFO][5210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" HandleID="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5a10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-42", "pod":"coredns-7c65d6cfc9-zkt4z", "timestamp":"2025-09-13 00:07:52.175800062 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.176 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.176 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.176 [INFO][5210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.189 [INFO][5210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.203 [INFO][5210] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.212 [INFO][5210] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.217 [INFO][5210] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.223 [INFO][5210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.223 [INFO][5210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.225 [INFO][5210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658 Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.238 [INFO][5210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.249 [INFO][5210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 handle="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.249 [INFO][5210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" host="ip-172-31-25-42" Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.250 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:52.317911 containerd[2108]: 2025-09-13 00:07:52.250 [INFO][5210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" HandleID="k8s-pod-network.ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.255 [INFO][5188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ece05fb-73e6-4e68-ab11-712551293c2d", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"coredns-7c65d6cfc9-zkt4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0898084dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.256 [INFO][5188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.131/32] ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.256 [INFO][5188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c0898084dc ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.269 [INFO][5188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" 
WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.271 [INFO][5188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ece05fb-73e6-4e68-ab11-712551293c2d", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658", Pod:"coredns-7c65d6cfc9-zkt4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0898084dc", MAC:"e6:59:14:ca:cd:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:52.320010 containerd[2108]: 2025-09-13 00:07:52.305 [INFO][5188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zkt4z" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:07:52.415606 containerd[2108]: time="2025-09-13T00:07:52.415223015Z" level=info msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" Sep 13 00:07:52.417315 containerd[2108]: time="2025-09-13T00:07:52.417264182Z" level=info msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" Sep 13 00:07:52.421330 containerd[2108]: time="2025-09-13T00:07:52.421295424Z" level=info msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" Sep 13 00:07:52.427538 containerd[2108]: time="2025-09-13T00:07:52.423466280Z" level=info msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" Sep 13 00:07:52.449362 containerd[2108]: time="2025-09-13T00:07:52.449250847Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6b685ff94f-rk6kg,Uid:58082458-d541-4695-8472-c49eaa5420d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c\"" Sep 13 00:07:52.482499 containerd[2108]: time="2025-09-13T00:07:52.476134283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:52.482499 containerd[2108]: time="2025-09-13T00:07:52.476203435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:52.482499 containerd[2108]: time="2025-09-13T00:07:52.476226693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:52.482499 containerd[2108]: time="2025-09-13T00:07:52.476350941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:52.585381 sshd[5256]: Accepted publickey for core from 139.178.89.65 port 54318 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:52.587982 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:52.620573 systemd-logind[2075]: New session 8 of user core. Sep 13 00:07:52.624213 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:07:52.921168 containerd[2108]: time="2025-09-13T00:07:52.921119906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zkt4z,Uid:2ece05fb-73e6-4e68-ab11-712551293c2d,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658\"" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.581 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.590 [INFO][5344] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" iface="eth0" netns="/var/run/netns/cni-f3995130-1df3-8322-842d-4137091a9bee" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.604 [INFO][5344] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" iface="eth0" netns="/var/run/netns/cni-f3995130-1df3-8322-842d-4137091a9bee" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.604 [INFO][5344] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" iface="eth0" netns="/var/run/netns/cni-f3995130-1df3-8322-842d-4137091a9bee" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.604 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.604 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.863 [INFO][5364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.863 [INFO][5364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.866 [INFO][5364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.899 [WARNING][5364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.899 [INFO][5364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.901 [INFO][5364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:52.943641 containerd[2108]: 2025-09-13 00:07:52.913 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:07:52.944258 containerd[2108]: time="2025-09-13T00:07:52.943989828Z" level=info msg="CreateContainer within sandbox \"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:52.952134 containerd[2108]: time="2025-09-13T00:07:52.949508346Z" level=info msg="TearDown network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" successfully" Sep 13 00:07:52.952134 containerd[2108]: time="2025-09-13T00:07:52.949551408Z" level=info msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" returns successfully" Sep 13 00:07:52.952588 systemd[1]: run-netns-cni\x2df3995130\x2d1df3\x2d8322\x2d842d\x2d4137091a9bee.mount: Deactivated successfully. 
Sep 13 00:07:52.960816 containerd[2108]: time="2025-09-13T00:07:52.957702687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f9s78,Uid:f6b1d66d-bda4-482c-8530-6567389b0a59,Namespace:kube-system,Attempt:1,}" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.767 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.767 [INFO][5335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" iface="eth0" netns="/var/run/netns/cni-fc5b4668-88b9-59a0-b0a1-90eca652723c" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.767 [INFO][5335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" iface="eth0" netns="/var/run/netns/cni-fc5b4668-88b9-59a0-b0a1-90eca652723c" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.768 [INFO][5335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" iface="eth0" netns="/var/run/netns/cni-fc5b4668-88b9-59a0-b0a1-90eca652723c" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.768 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.768 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.885 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.886 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.901 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.940 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.940 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.958 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:52.979894 containerd[2108]: 2025-09-13 00:07:52.970 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:07:52.979894 containerd[2108]: time="2025-09-13T00:07:52.979819138Z" level=info msg="TearDown network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" successfully" Sep 13 00:07:52.979894 containerd[2108]: time="2025-09-13T00:07:52.979873183Z" level=info msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" returns successfully" Sep 13 00:07:52.983727 containerd[2108]: time="2025-09-13T00:07:52.982670737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6ztb2,Uid:3735d737-0352-445e-b2c0-da8688517912,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:52.991336 systemd[1]: run-netns-cni\x2dfc5b4668\x2d88b9\x2d59a0\x2db0a1\x2d90eca652723c.mount: Deactivated successfully. Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.802 [INFO][5324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.802 [INFO][5324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" iface="eth0" netns="/var/run/netns/cni-922990c0-a115-0d96-dcc4-4a6ebc29631d" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.803 [INFO][5324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" iface="eth0" netns="/var/run/netns/cni-922990c0-a115-0d96-dcc4-4a6ebc29631d" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.803 [INFO][5324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" iface="eth0" netns="/var/run/netns/cni-922990c0-a115-0d96-dcc4-4a6ebc29631d" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.803 [INFO][5324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.803 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:52.998 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.001 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.002 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.047 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.047 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.054 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:53.092348 containerd[2108]: 2025-09-13 00:07:53.068 [INFO][5324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:07:53.092348 containerd[2108]: time="2025-09-13T00:07:53.091675623Z" level=info msg="TearDown network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" successfully" Sep 13 00:07:53.092348 containerd[2108]: time="2025-09-13T00:07:53.091760428Z" level=info msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" returns successfully" Sep 13 00:07:53.093302 containerd[2108]: time="2025-09-13T00:07:53.093021530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-vzq9f,Uid:d4473fa4-a590-42c7-aa32-c3ce2e18df44,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:07:53.122244 containerd[2108]: time="2025-09-13T00:07:53.122104714Z" level=info msg="CreateContainer within sandbox \"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10d38cd8b7c843c8a1d4a89aa2f4497a3118be0fb9c7bb3570d06d37eaebb7b9\"" Sep 13 00:07:53.126508 containerd[2108]: time="2025-09-13T00:07:53.125618365Z" level=info msg="StartContainer for \"10d38cd8b7c843c8a1d4a89aa2f4497a3118be0fb9c7bb3570d06d37eaebb7b9\"" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.879 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.879 [INFO][5331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" iface="eth0" netns="/var/run/netns/cni-8f6772de-39d0-e534-f1a5-f8578c9c9563" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.879 [INFO][5331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" iface="eth0" netns="/var/run/netns/cni-8f6772de-39d0-e534-f1a5-f8578c9c9563" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.890 [INFO][5331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" iface="eth0" netns="/var/run/netns/cni-8f6772de-39d0-e534-f1a5-f8578c9c9563" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.890 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:52.890 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.085 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.086 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.087 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.114 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.114 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.116 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:53.130499 containerd[2108]: 2025-09-13 00:07:53.126 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:07:53.131923 containerd[2108]: time="2025-09-13T00:07:53.131884763Z" level=info msg="TearDown network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" successfully" Sep 13 00:07:53.132144 containerd[2108]: time="2025-09-13T00:07:53.132124228Z" level=info msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" returns successfully" Sep 13 00:07:53.133609 containerd[2108]: time="2025-09-13T00:07:53.133580001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhk7w,Uid:270d8e62-8736-4a5e-8bc3-f2ede76f3e76,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:53.431962 containerd[2108]: time="2025-09-13T00:07:53.430976678Z" level=info msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" Sep 13 00:07:53.605039 systemd-networkd[1657]: cali6576cfbe4bc: Link UP Sep 13 00:07:53.608878 systemd-networkd[1657]: cali6576cfbe4bc: Gained carrier Sep 13 00:07:53.644631 containerd[2108]: time="2025-09-13T00:07:53.644588375Z" level=info msg="StartContainer for \"10d38cd8b7c843c8a1d4a89aa2f4497a3118be0fb9c7bb3570d06d37eaebb7b9\" returns successfully" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.239 [INFO][5436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0 csi-node-driver- calico-system 3735d737-0352-445e-b2c0-da8688517912 982 0 2025-09-13 00:07:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-42 csi-node-driver-6ztb2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6576cfbe4bc [] [] }} ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.239 [INFO][5436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.381 [INFO][5483] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" HandleID="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.381 [INFO][5483] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" HandleID="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-42", "pod":"csi-node-driver-6ztb2", "timestamp":"2025-09-13 
00:07:53.381670678 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.381 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.381 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.382 [INFO][5483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.420 [INFO][5483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.443 [INFO][5483] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.485 [INFO][5483] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.501 [INFO][5483] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.505 [INFO][5483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.505 [INFO][5483] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.514 [INFO][5483] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.534 [INFO][5483] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.551 [INFO][5483] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 handle="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.552 [INFO][5483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" host="ip-172-31-25-42" Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.553 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:53.730756 containerd[2108]: 2025-09-13 00:07:53.553 [INFO][5483] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" HandleID="k8s-pod-network.27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.584 [INFO][5436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3735d737-0352-445e-b2c0-da8688517912", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"csi-node-driver-6ztb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6576cfbe4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.587 [INFO][5436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.132/32] ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.593 [INFO][5436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6576cfbe4bc ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.620 [INFO][5436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.625 [INFO][5436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" 
Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3735d737-0352-445e-b2c0-da8688517912", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd", Pod:"csi-node-driver-6ztb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6576cfbe4bc", MAC:"92:b9:ec:e4:39:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:53.749112 containerd[2108]: 2025-09-13 00:07:53.666 [INFO][5436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd" Namespace="calico-system" Pod="csi-node-driver-6ztb2" WorkloadEndpoint="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:07:53.761095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663238649.mount: Deactivated successfully. Sep 13 00:07:53.761363 systemd[1]: run-netns-cni\x2d922990c0\x2da115\x2d0d96\x2ddcc4\x2d4a6ebc29631d.mount: Deactivated successfully. Sep 13 00:07:53.761506 systemd[1]: run-netns-cni\x2d8f6772de\x2d39d0\x2de534\x2df1a5\x2df8578c9c9563.mount: Deactivated successfully. 
Sep 13 00:07:53.810682 systemd-networkd[1657]: calia1aa6e8ea83: Gained IPv6LL Sep 13 00:07:53.874913 systemd-networkd[1657]: cali3c0898084dc: Gained IPv6LL Sep 13 00:07:54.065228 systemd-networkd[1657]: cali858573d472a: Link UP Sep 13 00:07:54.069220 systemd-networkd[1657]: cali858573d472a: Gained carrier Sep 13 00:07:54.123911 kubelet[3333]: I0913 00:07:54.122584 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zkt4z" podStartSLOduration=40.122560043 podStartE2EDuration="40.122560043s" podCreationTimestamp="2025-09-13 00:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:54.017860753 +0000 UTC m=+44.778876642" watchObservedRunningTime="2025-09-13 00:07:54.122560043 +0000 UTC m=+44.883575929" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.360 [INFO][5424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0 coredns-7c65d6cfc9- kube-system f6b1d66d-bda4-482c-8530-6567389b0a59 970 0 2025-09-13 00:07:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-42 coredns-7c65d6cfc9-f9s78 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali858573d472a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.362 [INFO][5424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.798 [INFO][5509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" HandleID="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.807 [INFO][5509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" HandleID="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f940), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-42", "pod":"coredns-7c65d6cfc9-f9s78", "timestamp":"2025-09-13 00:07:53.798729036 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.807 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.807 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.807 [INFO][5509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.832 [INFO][5509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.872 [INFO][5509] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.903 [INFO][5509] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.912 [INFO][5509] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.915 [INFO][5509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.915 [INFO][5509] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.920 [INFO][5509] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691 Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.939 [INFO][5509] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.979 [INFO][5509] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 handle="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.980 [INFO][5509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" host="ip-172-31-25-42" Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.980 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:54.184041 containerd[2108]: 2025-09-13 00:07:53.980 [INFO][5509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" HandleID="k8s-pod-network.f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.025 [INFO][5424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6b1d66d-bda4-482c-8530-6567389b0a59", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"coredns-7c65d6cfc9-f9s78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali858573d472a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.031 [INFO][5424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.133/32] ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.031 [INFO][5424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali858573d472a ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.100 [INFO][5424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" 
WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.101 [INFO][5424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6b1d66d-bda4-482c-8530-6567389b0a59", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691", Pod:"coredns-7c65d6cfc9-f9s78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali858573d472a", MAC:"de:0e:f2:fa:e0:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.186195 containerd[2108]: 2025-09-13 00:07:54.150 [INFO][5424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f9s78" WorkloadEndpoint="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:07:54.190520 sshd[5256]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:54.202190 systemd[1]: sshd@7-172.31.25.42:22-139.178.89.65:54318.service: Deactivated successfully. Sep 13 00:07:54.213569 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:07:54.220045 systemd-logind[2075]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:07:54.222892 containerd[2108]: time="2025-09-13T00:07:54.217672490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:54.224119 systemd-logind[2075]: Removed session 8. Sep 13 00:07:54.232107 containerd[2108]: time="2025-09-13T00:07:54.228752322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:54.232107 containerd[2108]: time="2025-09-13T00:07:54.228791295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.232107 containerd[2108]: time="2025-09-13T00:07:54.228940694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.267897 systemd-networkd[1657]: cali06e232f7846: Link UP Sep 13 00:07:54.269773 systemd-networkd[1657]: cali06e232f7846: Gained carrier Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:53.587 [INFO][5452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0 calico-apiserver-8986d45d5- calico-apiserver d4473fa4-a590-42c7-aa32-c3ce2e18df44 983 0 2025-09-13 00:07:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8986d45d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-42 calico-apiserver-8986d45d5-vzq9f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06e232f7846 [] [] }} ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:53.588 [INFO][5452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.013 [INFO][5541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" HandleID="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.019 [INFO][5541] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" HandleID="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-42", "pod":"calico-apiserver-8986d45d5-vzq9f", "timestamp":"2025-09-13 00:07:54.013741614 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.019 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.019 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.019 [INFO][5541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.049 [INFO][5541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.105 [INFO][5541] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.145 [INFO][5541] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.149 [INFO][5541] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.156 [INFO][5541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.156 [INFO][5541] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.160 [INFO][5541] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.173 [INFO][5541] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.219 [INFO][5541] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.224 [INFO][5541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" host="ip-172-31-25-42" Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.227 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:54.338413 containerd[2108]: 2025-09-13 00:07:54.228 [INFO][5541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" HandleID="k8s-pod-network.d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.259 [INFO][5452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4473fa4-a590-42c7-aa32-c3ce2e18df44", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"calico-apiserver-8986d45d5-vzq9f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06e232f7846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.259 [INFO][5452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.134/32] ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.262 [INFO][5452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06e232f7846 ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.271 [INFO][5452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.278 [INFO][5452] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4473fa4-a590-42c7-aa32-c3ce2e18df44", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a", Pod:"calico-apiserver-8986d45d5-vzq9f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06e232f7846", MAC:"92:f6:55:a1:7a:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.339911 containerd[2108]: 2025-09-13 00:07:54.304 [INFO][5452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-vzq9f" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:07:54.416016 containerd[2108]: time="2025-09-13T00:07:54.415208993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:54.416016 containerd[2108]: time="2025-09-13T00:07:54.415302069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:54.416016 containerd[2108]: time="2025-09-13T00:07:54.415324783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.416016 containerd[2108]: time="2025-09-13T00:07:54.415459259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.491261 systemd-networkd[1657]: cali1af562894e7: Link UP Sep 13 00:07:54.496432 systemd-networkd[1657]: cali1af562894e7: Gained carrier Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.890 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.890 [INFO][5525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" iface="eth0" netns="/var/run/netns/cni-f878bfdc-bfc4-ebcf-12e3-790f0fa634cf" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.891 [INFO][5525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" iface="eth0" netns="/var/run/netns/cni-f878bfdc-bfc4-ebcf-12e3-790f0fa634cf" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.898 [INFO][5525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" iface="eth0" netns="/var/run/netns/cni-f878bfdc-bfc4-ebcf-12e3-790f0fa634cf" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.898 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:53.898 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.223 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.223 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.447 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.484 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.484 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.487 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:54.503461 containerd[2108]: 2025-09-13 00:07:54.496 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:07:54.517812 containerd[2108]: time="2025-09-13T00:07:54.516570174Z" level=info msg="TearDown network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" successfully" Sep 13 00:07:54.517812 containerd[2108]: time="2025-09-13T00:07:54.517749305Z" level=info msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" returns successfully" Sep 13 00:07:54.517490 systemd[1]: run-netns-cni\x2df878bfdc\x2dbfc4\x2debcf\x2d12e3\x2d790f0fa634cf.mount: Deactivated successfully. 
Sep 13 00:07:54.534730 containerd[2108]: time="2025-09-13T00:07:54.532551191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:54.534730 containerd[2108]: time="2025-09-13T00:07:54.534144481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:54.534730 containerd[2108]: time="2025-09-13T00:07:54.534178334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.536332 containerd[2108]: time="2025-09-13T00:07:54.535964517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-6tw7s,Uid:26c5671e-e9f7-4e86-8b88-1ceb37855ab1,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:07:54.539720 containerd[2108]: time="2025-09-13T00:07:54.536994483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:53.546 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0 goldmane-7988f88666- calico-system 270d8e62-8736-4a5e-8bc3-f2ede76f3e76 984 0 2025-09-13 00:07:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-42 goldmane-7988f88666-zhk7w eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1af562894e7 [] [] }} ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:53.546 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.178 [INFO][5537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" HandleID="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.179 [INFO][5537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" HandleID="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003140a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-42", "pod":"goldmane-7988f88666-zhk7w", "timestamp":"2025-09-13 00:07:54.178297893 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.179 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.228 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.230 [INFO][5537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.256 [INFO][5537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.356 [INFO][5537] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.390 [INFO][5537] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.395 [INFO][5537] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.409 [INFO][5537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.409 [INFO][5537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.414 [INFO][5537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.425 [INFO][5537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.446 [INFO][5537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.135/26] block=192.168.105.128/26 handle="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.447 [INFO][5537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.135/26] handle="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" host="ip-172-31-25-42" Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.447 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:54.568966 containerd[2108]: 2025-09-13 00:07:54.447 [INFO][5537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.135/26] IPv6=[] ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" HandleID="k8s-pod-network.47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.456 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"270d8e62-8736-4a5e-8bc3-f2ede76f3e76", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"goldmane-7988f88666-zhk7w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af562894e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.461 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.135/32] ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.462 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af562894e7 ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.500 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.504 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" 
WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"270d8e62-8736-4a5e-8bc3-f2ede76f3e76", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c", Pod:"goldmane-7988f88666-zhk7w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af562894e7", MAC:"1a:45:33:b5:61:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:54.569908 containerd[2108]: 2025-09-13 00:07:54.533 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c" Namespace="calico-system" Pod="goldmane-7988f88666-zhk7w" WorkloadEndpoint="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:07:54.625299 containerd[2108]: time="2025-09-13T00:07:54.625152607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6ztb2,Uid:3735d737-0352-445e-b2c0-da8688517912,Namespace:calico-system,Attempt:1,} returns sandbox id \"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd\"" Sep 13 00:07:54.682297 containerd[2108]: time="2025-09-13T00:07:54.682171214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f9s78,Uid:f6b1d66d-bda4-482c-8530-6567389b0a59,Namespace:kube-system,Attempt:1,} returns sandbox id \"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691\"" Sep 13 00:07:54.695776 containerd[2108]: time="2025-09-13T00:07:54.695737946Z" level=info msg="CreateContainer within sandbox \"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:54.732482 containerd[2108]: time="2025-09-13T00:07:54.732066695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:54.732482 containerd[2108]: time="2025-09-13T00:07:54.732126467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:54.732482 containerd[2108]: time="2025-09-13T00:07:54.732144361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.732482 containerd[2108]: time="2025-09-13T00:07:54.732258293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:54.748982 containerd[2108]: time="2025-09-13T00:07:54.747415096Z" level=info msg="CreateContainer within sandbox \"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e4247d0bd066b2b50f2153ad56010a6ad57d09eaab655e7df2a649cfa4c4de2\"" Sep 13 00:07:54.753923 containerd[2108]: time="2025-09-13T00:07:54.753497889Z" level=info msg="StartContainer for \"9e4247d0bd066b2b50f2153ad56010a6ad57d09eaab655e7df2a649cfa4c4de2\"" Sep 13 00:07:54.788802 containerd[2108]: time="2025-09-13T00:07:54.788564110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-vzq9f,Uid:d4473fa4-a590-42c7-aa32-c3ce2e18df44,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a\"" Sep 13 00:07:54.833044 systemd[1]: run-containerd-runc-k8s.io-47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c-runc.sPAfxe.mount: Deactivated successfully. Sep 13 00:07:54.834869 systemd-networkd[1657]: cali6576cfbe4bc: Gained IPv6LL Sep 13 00:07:55.071803 containerd[2108]: time="2025-09-13T00:07:55.070892130Z" level=info msg="StartContainer for \"9e4247d0bd066b2b50f2153ad56010a6ad57d09eaab655e7df2a649cfa4c4de2\" returns successfully" Sep 13 00:07:55.109759 systemd-networkd[1657]: cali409a0dc050d: Link UP Sep 13 00:07:55.119946 containerd[2108]: time="2025-09-13T00:07:55.117112839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhk7w,Uid:270d8e62-8736-4a5e-8bc3-f2ede76f3e76,Namespace:calico-system,Attempt:1,} returns sandbox id \"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c\"" Sep 13 00:07:55.120489 systemd-networkd[1657]: cali409a0dc050d: Gained carrier Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.811 [INFO][5716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0 calico-apiserver-8986d45d5- calico-apiserver 26c5671e-e9f7-4e86-8b88-1ceb37855ab1 1000 0 2025-09-13 00:07:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8986d45d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-42 calico-apiserver-8986d45d5-6tw7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali409a0dc050d [] [] }} ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.811 [INFO][5716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.914 [INFO][5795] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" HandleID="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.914 [INFO][5795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" HandleID="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-42", "pod":"calico-apiserver-8986d45d5-6tw7s", "timestamp":"2025-09-13 00:07:54.914534232 +0000 UTC"}, Hostname:"ip-172-31-25-42", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.915 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.915 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.915 [INFO][5795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-42' Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.936 [INFO][5795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.950 [INFO][5795] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.976 [INFO][5795] ipam/ipam.go 511: Trying affinity for 192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.981 [INFO][5795] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.987 [INFO][5795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.988 [INFO][5795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:54.994 [INFO][5795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849 Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:55.019 [INFO][5795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:55.054 [INFO][5795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.105.136/26] block=192.168.105.128/26 handle="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" host="ip-172-31-25-42" Sep 13 00:07:55.193242 
containerd[2108]: 2025-09-13 00:07:55.054 [INFO][5795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.136/26] handle="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" host="ip-172-31-25-42" Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:55.055 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:55.193242 containerd[2108]: 2025-09-13 00:07:55.055 [INFO][5795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.105.136/26] IPv6=[] ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" HandleID="k8s-pod-network.efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.066 [INFO][5716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"26c5671e-e9f7-4e86-8b88-1ceb37855ab1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"", Pod:"calico-apiserver-8986d45d5-6tw7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali409a0dc050d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.070 [INFO][5716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.136/32] ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.070 [INFO][5716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali409a0dc050d ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.137 [INFO][5716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.143 [INFO][5716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"26c5671e-e9f7-4e86-8b88-1ceb37855ab1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849", Pod:"calico-apiserver-8986d45d5-6tw7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali409a0dc050d", MAC:"fe:fc:a5:4c:da:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:55.196480 containerd[2108]: 2025-09-13 00:07:55.182 [INFO][5716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849" Namespace="calico-apiserver" Pod="calico-apiserver-8986d45d5-6tw7s" WorkloadEndpoint="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:07:55.285643 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:07:55.282120 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:07:55.283789 systemd-resolved[1989]: Flushed all caches. Sep 13 00:07:55.296643 containerd[2108]: time="2025-09-13T00:07:55.296359024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:55.299107 containerd[2108]: time="2025-09-13T00:07:55.298803177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:55.299802 containerd[2108]: time="2025-09-13T00:07:55.298835294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:55.301073 containerd[2108]: time="2025-09-13T00:07:55.300364222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:55.487424 containerd[2108]: time="2025-09-13T00:07:55.486795982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8986d45d5-6tw7s,Uid:26c5671e-e9f7-4e86-8b88-1ceb37855ab1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849\"" Sep 13 00:07:55.538051 systemd-networkd[1657]: cali858573d472a: Gained IPv6LL Sep 13 00:07:55.708425 kubelet[3333]: I0913 00:07:55.708325 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:55.829628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630302978.mount: Deactivated successfully. Sep 13 00:07:55.866481 containerd[2108]: time="2025-09-13T00:07:55.866434565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:55.868362 containerd[2108]: time="2025-09-13T00:07:55.868309131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:07:55.869915 containerd[2108]: time="2025-09-13T00:07:55.869877873Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:55.896629 containerd[2108]: time="2025-09-13T00:07:55.895230931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:55.897842 containerd[2108]: time="2025-09-13T00:07:55.897719559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.697538306s" Sep 13 00:07:55.897842 containerd[2108]: time="2025-09-13T00:07:55.897768807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:07:55.905532 containerd[2108]: time="2025-09-13T00:07:55.905488194Z" level=info msg="CreateContainer within sandbox \"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:07:55.905680 containerd[2108]: time="2025-09-13T00:07:55.905632934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:07:55.923809 systemd-networkd[1657]: cali1af562894e7: Gained IPv6LL Sep 13 00:07:55.935872 containerd[2108]: time="2025-09-13T00:07:55.935678974Z" level=info msg="CreateContainer within sandbox \"20344bfc0660385ccf6215b8c77f2fc18e4c3809fd1177cfd89868a35ac7966b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b771318c12cc99492bf06d42d51f04dd32f903872ed0bf15ad90f23853e5da2d\"" Sep 13 00:07:55.941208 containerd[2108]: 
time="2025-09-13T00:07:55.939235685Z" level=info msg="StartContainer for \"b771318c12cc99492bf06d42d51f04dd32f903872ed0bf15ad90f23853e5da2d\"" Sep 13 00:07:55.986036 systemd-networkd[1657]: cali06e232f7846: Gained IPv6LL Sep 13 00:07:56.065097 kubelet[3333]: I0913 00:07:56.064950 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f9s78" podStartSLOduration=42.063957658 podStartE2EDuration="42.063957658s" podCreationTimestamp="2025-09-13 00:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:56.061485042 +0000 UTC m=+46.822500931" watchObservedRunningTime="2025-09-13 00:07:56.063957658 +0000 UTC m=+46.824973547" Sep 13 00:07:56.179532 containerd[2108]: time="2025-09-13T00:07:56.179407660Z" level=info msg="StartContainer for \"b771318c12cc99492bf06d42d51f04dd32f903872ed0bf15ad90f23853e5da2d\" returns successfully" Sep 13 00:07:56.241963 systemd-networkd[1657]: cali409a0dc050d: Gained IPv6LL Sep 13 00:07:57.081739 kubelet[3333]: I0913 00:07:57.080933 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-97749c699-2lxv5" podStartSLOduration=2.963821715 podStartE2EDuration="10.080916601s" podCreationTimestamp="2025-09-13 00:07:47 +0000 UTC" firstStartedPulling="2025-09-13 00:07:48.784580568 +0000 UTC m=+39.545596437" lastFinishedPulling="2025-09-13 00:07:55.901675442 +0000 UTC m=+46.662691323" observedRunningTime="2025-09-13 00:07:57.080206942 +0000 UTC m=+47.841222830" watchObservedRunningTime="2025-09-13 00:07:57.080916601 +0000 UTC m=+47.841932522" Sep 13 00:07:58.545399 ntpd[2058]: Listen normally on 6 vxlan.calico 192.168.105.128:123 Sep 13 00:07:58.545489 ntpd[2058]: Listen normally on 7 vxlan.calico [fe80::64de:83ff:fe53:7562%4]:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 6 vxlan.calico 192.168.105.128:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 7 vxlan.calico [fe80::64de:83ff:fe53:7562%4]:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 8 calicc74251078c [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 9 calia1aa6e8ea83 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 10 cali3c0898084dc [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:07:58.546247 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 11 cali6576cfbe4bc [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:07:58.545550 ntpd[2058]: Listen normally on 8 calicc74251078c [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:07:58.546537 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 12 cali858573d472a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:07:58.546537 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 13 cali06e232f7846 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:07:58.546537 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 14 cali1af562894e7 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:07:58.546537 ntpd[2058]: 13 Sep 00:07:58 ntpd[2058]: Listen normally on 15 cali409a0dc050d [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:07:58.545602 ntpd[2058]: Listen normally on 9 calia1aa6e8ea83 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:07:58.545645 ntpd[2058]: Listen normally on 10 cali3c0898084dc [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:07:58.545687 
ntpd[2058]: Listen normally on 11 cali6576cfbe4bc [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:07:58.546255 ntpd[2058]: Listen normally on 12 cali858573d472a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:07:58.546342 ntpd[2058]: Listen normally on 13 cali06e232f7846 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:07:58.546385 ntpd[2058]: Listen normally on 14 cali1af562894e7 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:07:58.546424 ntpd[2058]: Listen normally on 15 cali409a0dc050d [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:07:58.681980 containerd[2108]: time="2025-09-13T00:07:58.681929097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:58.684723 containerd[2108]: time="2025-09-13T00:07:58.684656693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:07:58.688404 containerd[2108]: time="2025-09-13T00:07:58.688347304Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:58.694561 containerd[2108]: time="2025-09-13T00:07:58.694103700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:58.701455 containerd[2108]: time="2025-09-13T00:07:58.701417638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.795752713s" Sep 13 00:07:58.701641 containerd[2108]: time="2025-09-13T00:07:58.701625824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:07:58.702689 containerd[2108]: time="2025-09-13T00:07:58.702667500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:07:58.835376 containerd[2108]: time="2025-09-13T00:07:58.835258055Z" level=info msg="CreateContainer within sandbox \"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:07:58.871547 containerd[2108]: time="2025-09-13T00:07:58.871495938Z" level=info msg="CreateContainer within sandbox \"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dfb40a8974a4766c4a55689782781538bc0ec712eb656968408df8e1f9698a1d\"" Sep 13 00:07:58.873804 containerd[2108]: time="2025-09-13T00:07:58.873389035Z" level=info msg="StartContainer for \"dfb40a8974a4766c4a55689782781538bc0ec712eb656968408df8e1f9698a1d\"" Sep 13 00:07:58.959476 containerd[2108]: time="2025-09-13T00:07:58.959417700Z" level=info msg="StartContainer for \"dfb40a8974a4766c4a55689782781538bc0ec712eb656968408df8e1f9698a1d\" returns successfully" Sep 13 00:07:59.105465 kubelet[3333]: I0913 00:07:59.105303 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-6b685ff94f-rk6kg" podStartSLOduration=24.871103851 podStartE2EDuration="31.105283074s" podCreationTimestamp="2025-09-13 00:07:28 +0000 UTC" firstStartedPulling="2025-09-13 00:07:52.468316152 +0000 UTC m=+43.229332031" lastFinishedPulling="2025-09-13 00:07:58.702495385 +0000 UTC m=+49.463511254" observedRunningTime="2025-09-13 00:07:59.10048406 +0000 UTC m=+49.861499950" watchObservedRunningTime="2025-09-13 00:07:59.105283074 +0000 UTC m=+49.866298962" Sep 13 00:07:59.219026 systemd[1]: Started sshd@8-172.31.25.42:22-139.178.89.65:54320.service - OpenSSH per-connection server daemon (139.178.89.65:54320). Sep 13 00:07:59.434318 sshd[6050]: Accepted publickey for core from 139.178.89.65 port 54320 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:59.437819 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:59.447157 systemd-logind[2075]: New session 9 of user core. Sep 13 00:07:59.454095 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:08:00.418824 sshd[6050]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:00.425943 systemd[1]: sshd@8-172.31.25.42:22-139.178.89.65:54320.service: Deactivated successfully. Sep 13 00:08:00.438881 systemd-logind[2075]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:08:00.441908 containerd[2108]: time="2025-09-13T00:08:00.440275860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:00.443574 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:08:00.445457 systemd-logind[2075]: Removed session 9. Sep 13 00:08:00.456718 containerd[2108]: time="2025-09-13T00:08:00.456639078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:08:00.458949 containerd[2108]: time="2025-09-13T00:08:00.458689330Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:00.465553 containerd[2108]: time="2025-09-13T00:08:00.464319190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.761564699s" Sep 13 00:08:00.465553 containerd[2108]: time="2025-09-13T00:08:00.464369825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:08:00.465553 containerd[2108]: time="2025-09-13T00:08:00.464521865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:00.470445 containerd[2108]: time="2025-09-13T00:08:00.468782392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:00.480608 containerd[2108]: time="2025-09-13T00:08:00.480563604Z" level=info msg="CreateContainer within sandbox \"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" 
Sep 13 00:08:00.530185 containerd[2108]: time="2025-09-13T00:08:00.530142033Z" level=info msg="CreateContainer within sandbox \"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d276a27bd72806880122904f7545ad75c26dd52400008e18dcef059eeffd61f3\"" Sep 13 00:08:00.530893 containerd[2108]: time="2025-09-13T00:08:00.530863276Z" level=info msg="StartContainer for \"d276a27bd72806880122904f7545ad75c26dd52400008e18dcef059eeffd61f3\"" Sep 13 00:08:00.622110 containerd[2108]: time="2025-09-13T00:08:00.621974991Z" level=info msg="StartContainer for \"d276a27bd72806880122904f7545ad75c26dd52400008e18dcef059eeffd61f3\" returns successfully" Sep 13 00:08:00.724350 systemd[1]: run-containerd-runc-k8s.io-d276a27bd72806880122904f7545ad75c26dd52400008e18dcef059eeffd61f3-runc.7uq4eb.mount: Deactivated successfully. Sep 13 00:08:01.234261 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:01.236807 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:01.234301 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:03.657460 containerd[2108]: time="2025-09-13T00:08:03.657411911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:03.659265 containerd[2108]: time="2025-09-13T00:08:03.658864166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:08:03.660322 containerd[2108]: time="2025-09-13T00:08:03.660151836Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:03.663045 containerd[2108]: time="2025-09-13T00:08:03.662770129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:03.663373 containerd[2108]: time="2025-09-13T00:08:03.663348800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.194423509s" Sep 13 00:08:03.663506 containerd[2108]: time="2025-09-13T00:08:03.663489256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:03.670071 containerd[2108]: time="2025-09-13T00:08:03.670036518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:08:03.691648 containerd[2108]: time="2025-09-13T00:08:03.691542276Z" level=info msg="CreateContainer within sandbox \"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:03.709273 containerd[2108]: time="2025-09-13T00:08:03.709215761Z" level=info msg="CreateContainer within sandbox \"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"34f45d39d4d23f5296f540036e27ffeeba869aaf9714a28250cad26e782e5d4a\"" Sep 13 00:08:03.711459 containerd[2108]: time="2025-09-13T00:08:03.711228122Z" level=info msg="StartContainer for \"34f45d39d4d23f5296f540036e27ffeeba869aaf9714a28250cad26e782e5d4a\"" Sep 13 00:08:03.821222 containerd[2108]: time="2025-09-13T00:08:03.821176399Z" level=info msg="StartContainer for \"34f45d39d4d23f5296f540036e27ffeeba869aaf9714a28250cad26e782e5d4a\" returns successfully" Sep 13 00:08:04.338866 kubelet[3333]: I0913 00:08:04.337917 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8986d45d5-vzq9f" podStartSLOduration=31.464330714 podStartE2EDuration="40.336487303s" podCreationTimestamp="2025-09-13 00:07:24 +0000 UTC" firstStartedPulling="2025-09-13 00:07:54.797558678 +0000 UTC m=+45.558574547" lastFinishedPulling="2025-09-13 00:08:03.669715266 +0000 UTC m=+54.430731136" observedRunningTime="2025-09-13 00:08:04.300080552 +0000 UTC m=+55.061096441" watchObservedRunningTime="2025-09-13 00:08:04.336487303 +0000 UTC m=+55.097503189" Sep 13 00:08:05.244764 kubelet[3333]: I0913 00:08:05.244232 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:05.271208 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:05.270011 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:05.270051 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:05.451112 systemd[1]: Started sshd@9-172.31.25.42:22-139.178.89.65:52382.service - OpenSSH per-connection server daemon (139.178.89.65:52382). Sep 13 00:08:05.700930 sshd[6159]: Accepted publickey for core from 139.178.89.65 port 52382 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:05.737361 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:05.760196 systemd-logind[2075]: New session 10 of user core. Sep 13 00:08:05.767455 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:08:06.734202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289084807.mount: Deactivated successfully. Sep 13 00:08:07.217438 sshd[6159]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:07.225153 systemd[1]: sshd@9-172.31.25.42:22-139.178.89.65:52382.service: Deactivated successfully. Sep 13 00:08:07.241942 systemd-logind[2075]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:08:07.263966 systemd[1]: Started sshd@10-172.31.25.42:22-139.178.89.65:52388.service - OpenSSH per-connection server daemon (139.178.89.65:52388). Sep 13 00:08:07.264376 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:08:07.270142 systemd-logind[2075]: Removed session 10. Sep 13 00:08:07.313882 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:07.316472 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:07.313892 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:07.443329 sshd[6180]: Accepted publickey for core from 139.178.89.65 port 52388 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:07.446482 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:07.456657 systemd-logind[2075]: New session 11 of user core. Sep 13 00:08:07.460440 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:08:07.958533 sshd[6180]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:07.979467 systemd[1]: Started sshd@11-172.31.25.42:22-139.178.89.65:52398.service - OpenSSH per-connection server daemon (139.178.89.65:52398). Sep 13 00:08:07.980628 systemd[1]: sshd@10-172.31.25.42:22-139.178.89.65:52388.service: Deactivated successfully. Sep 13 00:08:07.987827 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:08:08.000166 systemd-logind[2075]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:08:08.009654 systemd-logind[2075]: Removed session 11. Sep 13 00:08:08.022149 containerd[2108]: time="2025-09-13T00:08:08.022062579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.025201 containerd[2108]: time="2025-09-13T00:08:08.023932891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:08:08.030457 containerd[2108]: time="2025-09-13T00:08:08.030024130Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.036073 containerd[2108]: time="2025-09-13T00:08:08.036009550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.040028 containerd[2108]: time="2025-09-13T00:08:08.039324545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.369249186s" Sep 13 00:08:08.040028 containerd[2108]: time="2025-09-13T00:08:08.039537623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:08:08.098319 containerd[2108]: time="2025-09-13T00:08:08.098002700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:08.189319 containerd[2108]: time="2025-09-13T00:08:08.189073832Z" level=info msg="CreateContainer within sandbox \"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:08:08.195907 sshd[6190]: Accepted publickey for core from 139.178.89.65 port 52398 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:08.199497 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:08.213826 systemd-logind[2075]: New session 12 of user core. Sep 13 00:08:08.223524 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 13 00:08:08.269283 containerd[2108]: time="2025-09-13T00:08:08.267319906Z" level=info msg="CreateContainer within sandbox \"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"599081885b706e46315a59d141a1549af6a579e489c3ceb75be75f8ad364de21\"" Sep 13 00:08:08.299815 containerd[2108]: time="2025-09-13T00:08:08.299206856Z" level=info msg="StartContainer for \"599081885b706e46315a59d141a1549af6a579e489c3ceb75be75f8ad364de21\"" Sep 13 00:08:08.457583 containerd[2108]: time="2025-09-13T00:08:08.456548192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:08.460836 containerd[2108]: time="2025-09-13T00:08:08.460744395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:08:08.465662 containerd[2108]: time="2025-09-13T00:08:08.464971994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 366.928545ms" Sep 13 00:08:08.465662 containerd[2108]: time="2025-09-13T00:08:08.465026488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:08.466471 containerd[2108]: time="2025-09-13T00:08:08.466426238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:08:08.482772 containerd[2108]: time="2025-09-13T00:08:08.480479971Z" level=info msg="CreateContainer within sandbox \"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:08.564115 containerd[2108]: time="2025-09-13T00:08:08.564070585Z" level=info msg="CreateContainer within sandbox \"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"69e4cf42e4ca5e0af434092d56ff8cfed4fcfbfc258366926920ee24f5f41f31\"" Sep 13 00:08:08.568667 containerd[2108]: time="2025-09-13T00:08:08.566995252Z" level=info msg="StartContainer for \"69e4cf42e4ca5e0af434092d56ff8cfed4fcfbfc258366926920ee24f5f41f31\"" Sep 13 00:08:08.848513 containerd[2108]: time="2025-09-13T00:08:08.848385711Z" level=info msg="StartContainer for \"599081885b706e46315a59d141a1549af6a579e489c3ceb75be75f8ad364de21\" returns successfully" Sep 13 00:08:08.880629 sshd[6190]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:08.890610 systemd[1]: sshd@11-172.31.25.42:22-139.178.89.65:52398.service: Deactivated successfully. Sep 13 00:08:08.915201 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:08:08.920741 systemd-logind[2075]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:08:08.927170 systemd-logind[2075]: Removed session 12. 
Sep 13 00:08:08.941021 containerd[2108]: time="2025-09-13T00:08:08.940246462Z" level=info msg="StartContainer for \"69e4cf42e4ca5e0af434092d56ff8cfed4fcfbfc258366926920ee24f5f41f31\" returns successfully" Sep 13 00:08:09.230478 systemd[1]: run-containerd-runc-k8s.io-599081885b706e46315a59d141a1549af6a579e489c3ceb75be75f8ad364de21-runc.FPj0YJ.mount: Deactivated successfully. Sep 13 00:08:09.364145 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:09.364773 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:09.364170 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:09.681195 containerd[2108]: time="2025-09-13T00:08:09.681155482Z" level=info msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" Sep 13 00:08:09.762848 kubelet[3333]: I0913 00:08:09.752151 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8986d45d5-6tw7s" podStartSLOduration=32.759991228 podStartE2EDuration="45.73560612s" podCreationTimestamp="2025-09-13 00:07:24 +0000 UTC" firstStartedPulling="2025-09-13 00:07:55.490578686 +0000 UTC m=+46.251594556" lastFinishedPulling="2025-09-13 00:08:08.466193574 +0000 UTC m=+59.227209448" observedRunningTime="2025-09-13 00:08:09.704156277 +0000 UTC m=+60.465172171" watchObservedRunningTime="2025-09-13 00:08:09.73560612 +0000 UTC m=+60.496622005" Sep 13 00:08:09.763530 kubelet[3333]: I0913 00:08:09.762991 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-zhk7w" podStartSLOduration=28.839709294 podStartE2EDuration="41.762966089s" podCreationTimestamp="2025-09-13 00:07:28 +0000 UTC" firstStartedPulling="2025-09-13 00:07:55.14830493 +0000 UTC m=+45.909320815" lastFinishedPulling="2025-09-13 00:08:08.071561732 +0000 UTC m=+58.832577610" observedRunningTime="2025-09-13 00:08:09.732221058 +0000 UTC m=+60.493236948" watchObservedRunningTime="2025-09-13 00:08:09.762966089 +0000 UTC m=+60.523981978" Sep 13 00:08:10.355593 kubelet[3333]: I0913 00:08:10.355519 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:11.066183 containerd[2108]: time="2025-09-13T00:08:11.064978624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.075248 containerd[2108]: time="2025-09-13T00:08:11.075094436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:08:11.089385 containerd[2108]: time="2025-09-13T00:08:11.088844906Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.104954 containerd[2108]: time="2025-09-13T00:08:11.104432388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.105417 containerd[2108]: time="2025-09-13T00:08:11.105373809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.63890468s" Sep 13 00:08:11.105530 containerd[2108]: time="2025-09-13T00:08:11.105424986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:08:11.411995 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:11.411013 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:11.412527 containerd[2108]: time="2025-09-13T00:08:11.411668408Z" level=info msg="CreateContainer within sandbox \"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:08:11.411045 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:10.904 [WARNING][6306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"270d8e62-8736-4a5e-8bc3-f2ede76f3e76", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c", Pod:"goldmane-7988f88666-zhk7w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af562894e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:10.940 [INFO][6306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:10.940 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" iface="eth0" netns="" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:10.940 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:10.940 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.554 [INFO][6340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.562 [INFO][6340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.566 [INFO][6340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.599 [WARNING][6340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.599 [INFO][6340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.602 [INFO][6340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:11.613868 containerd[2108]: 2025-09-13 00:08:11.608 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:11.659188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227321248.mount: Deactivated successfully. 
Sep 13 00:08:11.679096 containerd[2108]: time="2025-09-13T00:08:11.678929833Z" level=info msg="TearDown network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" successfully" Sep 13 00:08:11.679096 containerd[2108]: time="2025-09-13T00:08:11.678970704Z" level=info msg="StopPodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" returns successfully" Sep 13 00:08:11.753913 containerd[2108]: time="2025-09-13T00:08:11.753764241Z" level=info msg="CreateContainer within sandbox \"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7cd065dc90a6c88f1c9f5adb5ec7c6df313a6198d44f7bdc95563d23adf3c161\"" Sep 13 00:08:11.756758 containerd[2108]: time="2025-09-13T00:08:11.755742921Z" level=info msg="StartContainer for \"7cd065dc90a6c88f1c9f5adb5ec7c6df313a6198d44f7bdc95563d23adf3c161\"" Sep 13 00:08:11.956222 containerd[2108]: time="2025-09-13T00:08:11.955915670Z" level=info msg="RemovePodSandbox for \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" Sep 13 00:08:11.963535 containerd[2108]: time="2025-09-13T00:08:11.963151108Z" level=info msg="Forcibly stopping sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\"" Sep 13 00:08:12.002023 containerd[2108]: time="2025-09-13T00:08:12.001989936Z" level=info msg="StartContainer for \"7cd065dc90a6c88f1c9f5adb5ec7c6df313a6198d44f7bdc95563d23adf3c161\" returns successfully" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.090 [WARNING][6418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"270d8e62-8736-4a5e-8bc3-f2ede76f3e76", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"47fe458ebdd92c447100ece2ebe2357ede96608962ccb5db0bc6746da3e9d57c", Pod:"goldmane-7988f88666-zhk7w", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1af562894e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.090 [INFO][6418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.090 [INFO][6418] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" iface="eth0" netns="" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.090 [INFO][6418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.090 [INFO][6418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.115 [INFO][6425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.115 [INFO][6425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.116 [INFO][6425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.122 [WARNING][6425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.122 [INFO][6425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" HandleID="k8s-pod-network.4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Workload="ip--172--31--25--42-k8s-goldmane--7988f88666--zhk7w-eth0" Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.124 [INFO][6425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.131046 containerd[2108]: 2025-09-13 00:08:12.128 [INFO][6418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e" Sep 13 00:08:12.134882 containerd[2108]: time="2025-09-13T00:08:12.131113711Z" level=info msg="TearDown network for sandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" successfully" Sep 13 00:08:12.157819 containerd[2108]: time="2025-09-13T00:08:12.157766750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:12.166823 containerd[2108]: time="2025-09-13T00:08:12.166769535Z" level=info msg="RemovePodSandbox \"4ef35753bdd220ded158304661aa4d427654e999dd2c2c5cfba19433354bbb3e\" returns successfully" Sep 13 00:08:12.201785 containerd[2108]: time="2025-09-13T00:08:12.201746767Z" level=info msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.244 [WARNING][6441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ece05fb-73e6-4e68-ab11-712551293c2d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658", Pod:"coredns-7c65d6cfc9-zkt4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0898084dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.244 [INFO][6441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.244 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" iface="eth0" netns="" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.244 [INFO][6441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.244 [INFO][6441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.268 [INFO][6448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.269 [INFO][6448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.269 [INFO][6448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.275 [WARNING][6448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.275 [INFO][6448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.277 [INFO][6448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.281433 containerd[2108]: 2025-09-13 00:08:12.279 [INFO][6441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.281433 containerd[2108]: time="2025-09-13T00:08:12.281410336Z" level=info msg="TearDown network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" successfully" Sep 13 00:08:12.281433 containerd[2108]: time="2025-09-13T00:08:12.281431870Z" level=info msg="StopPodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" returns successfully" Sep 13 00:08:12.283452 containerd[2108]: time="2025-09-13T00:08:12.282598846Z" level=info msg="RemovePodSandbox for \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" Sep 13 00:08:12.283452 containerd[2108]: time="2025-09-13T00:08:12.282629477Z" level=info msg="Forcibly stopping sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\"" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.319 [WARNING][6462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ece05fb-73e6-4e68-ab11-712551293c2d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"ff8ffa6f2c84e5d3bec37b711cb983d051f7043f05e2d6e9b1c2ab5ccb812658", Pod:"coredns-7c65d6cfc9-zkt4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c0898084dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.319 [INFO][6462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.319 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" iface="eth0" netns="" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.319 [INFO][6462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.319 [INFO][6462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.345 [INFO][6469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.345 [INFO][6469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.345 [INFO][6469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.351 [WARNING][6469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.351 [INFO][6469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" HandleID="k8s-pod-network.05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--zkt4z-eth0" Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.353 [INFO][6469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.358419 containerd[2108]: 2025-09-13 00:08:12.355 [INFO][6462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7" Sep 13 00:08:12.359290 containerd[2108]: time="2025-09-13T00:08:12.358464478Z" level=info msg="TearDown network for sandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" successfully" Sep 13 00:08:12.365869 containerd[2108]: time="2025-09-13T00:08:12.365687338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:12.365869 containerd[2108]: time="2025-09-13T00:08:12.365779779Z" level=info msg="RemovePodSandbox \"05485f44237f061edc6199b0646bd36b77e86ad95a1119f34c5a21856ee6b6b7\" returns successfully" Sep 13 00:08:12.366551 containerd[2108]: time="2025-09-13T00:08:12.366283788Z" level=info msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.403 [WARNING][6483] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3735d737-0352-445e-b2c0-da8688517912", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd", Pod:"csi-node-driver-6ztb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6576cfbe4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.403 [INFO][6483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.403 [INFO][6483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" iface="eth0" netns="" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.403 [INFO][6483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.403 [INFO][6483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.435 [INFO][6490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.436 [INFO][6490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.436 [INFO][6490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.441 [WARNING][6490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.442 [INFO][6490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.443 [INFO][6490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.447806 containerd[2108]: 2025-09-13 00:08:12.445 [INFO][6483] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.449077 containerd[2108]: time="2025-09-13T00:08:12.447862091Z" level=info msg="TearDown network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" successfully" Sep 13 00:08:12.449077 containerd[2108]: time="2025-09-13T00:08:12.447884265Z" level=info msg="StopPodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" returns successfully" Sep 13 00:08:12.449077 containerd[2108]: time="2025-09-13T00:08:12.448500412Z" level=info msg="RemovePodSandbox for \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" Sep 13 00:08:12.449077 containerd[2108]: time="2025-09-13T00:08:12.448526234Z" level=info msg="Forcibly stopping sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\"" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.524 [WARNING][6504] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3735d737-0352-445e-b2c0-da8688517912", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"27cdb7c63c0e1cfdc30e8dbd7bc93abbec025a8f99fc57723b6b7755f3e8c2dd", Pod:"csi-node-driver-6ztb2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6576cfbe4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.524 [INFO][6504] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.524 [INFO][6504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" iface="eth0" netns="" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.524 [INFO][6504] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.524 [INFO][6504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.562 [INFO][6511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.563 [INFO][6511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.563 [INFO][6511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.571 [WARNING][6511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.571 [INFO][6511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" HandleID="k8s-pod-network.abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Workload="ip--172--31--25--42-k8s-csi--node--driver--6ztb2-eth0" Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.574 [INFO][6511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.579970 containerd[2108]: 2025-09-13 00:08:12.577 [INFO][6504] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49" Sep 13 00:08:12.579970 containerd[2108]: time="2025-09-13T00:08:12.579825571Z" level=info msg="TearDown network for sandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" successfully" Sep 13 00:08:12.590034 containerd[2108]: time="2025-09-13T00:08:12.589803293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:12.590034 containerd[2108]: time="2025-09-13T00:08:12.589901671Z" level=info msg="RemovePodSandbox \"abd894fc8455b6646ed35ad2c3c8b918df9c3939e9e3500dbb26fd6d6c867c49\" returns successfully" Sep 13 00:08:12.612459 containerd[2108]: time="2025-09-13T00:08:12.612417101Z" level=info msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" Sep 13 00:08:12.628248 systemd[1]: run-containerd-runc-k8s.io-7cd065dc90a6c88f1c9f5adb5ec7c6df313a6198d44f7bdc95563d23adf3c161-runc.pPTDr3.mount: Deactivated successfully. Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.674 [WARNING][6525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6b1d66d-bda4-482c-8530-6567389b0a59", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691", Pod:"coredns-7c65d6cfc9-f9s78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali858573d472a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.675 [INFO][6525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.675 [INFO][6525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" iface="eth0" netns="" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.675 [INFO][6525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.675 [INFO][6525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.711 [INFO][6532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.711 [INFO][6532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.711 [INFO][6532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.719 [WARNING][6532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.719 [INFO][6532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.722 [INFO][6532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.727429 containerd[2108]: 2025-09-13 00:08:12.724 [INFO][6525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.728634 containerd[2108]: time="2025-09-13T00:08:12.727999882Z" level=info msg="TearDown network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" successfully" Sep 13 00:08:12.728634 containerd[2108]: time="2025-09-13T00:08:12.728033542Z" level=info msg="StopPodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" returns successfully" Sep 13 00:08:12.738373 containerd[2108]: time="2025-09-13T00:08:12.737908200Z" level=info msg="RemovePodSandbox for \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" Sep 13 00:08:12.738373 containerd[2108]: time="2025-09-13T00:08:12.737963263Z" level=info msg="Forcibly stopping sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\"" Sep 13 00:08:12.821569 kubelet[3333]: I0913 00:08:12.817030 3333 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:08:12.827456 kubelet[3333]: I0913 00:08:12.827372 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6ztb2" podStartSLOduration=28.313472528 podStartE2EDuration="44.826442968s" podCreationTimestamp="2025-09-13 00:07:28 +0000 UTC" firstStartedPulling="2025-09-13 00:07:54.638987896 +0000 UTC m=+45.400003766" lastFinishedPulling="2025-09-13 00:08:11.151958326 +0000 UTC m=+61.912974206" observedRunningTime="2025-09-13 00:08:12.791519826 +0000 UTC m=+63.552535721" watchObservedRunningTime="2025-09-13 00:08:12.826442968 +0000 UTC m=+63.587458856" Sep 13 00:08:12.828999 kubelet[3333]: I0913 00:08:12.828803 3333 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.820 [WARNING][6547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f6b1d66d-bda4-482c-8530-6567389b0a59", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"f1163b62aaf18b8300b45ee8d01a841a3e8201fad71ed05782d49f9603878691", Pod:"coredns-7c65d6cfc9-f9s78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali858573d472a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.822 [INFO][6547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.822 [INFO][6547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" iface="eth0" netns="" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.822 [INFO][6547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.822 [INFO][6547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.855 [INFO][6554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.855 [INFO][6554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.855 [INFO][6554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.873 [WARNING][6554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.873 [INFO][6554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" HandleID="k8s-pod-network.624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Workload="ip--172--31--25--42-k8s-coredns--7c65d6cfc9--f9s78-eth0" Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.876 [INFO][6554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.886466 containerd[2108]: 2025-09-13 00:08:12.880 [INFO][6547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f" Sep 13 00:08:12.886466 containerd[2108]: time="2025-09-13T00:08:12.883123790Z" level=info msg="TearDown network for sandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" successfully" Sep 13 00:08:12.904062 containerd[2108]: time="2025-09-13T00:08:12.903742453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:12.904062 containerd[2108]: time="2025-09-13T00:08:12.903830023Z" level=info msg="RemovePodSandbox \"624b726afc63114e40e49988dff29783e38625e4c108df09135f223a73201b6f\" returns successfully" Sep 13 00:08:12.904584 containerd[2108]: time="2025-09-13T00:08:12.904554512Z" level=info msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.949 [WARNING][6569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"26c5671e-e9f7-4e86-8b88-1ceb37855ab1", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849", Pod:"calico-apiserver-8986d45d5-6tw7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali409a0dc050d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.949 [INFO][6569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.949 [INFO][6569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" iface="eth0" netns="" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.949 [INFO][6569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.949 [INFO][6569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.990 [INFO][6576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.991 [INFO][6576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.991 [INFO][6576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.998 [WARNING][6576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.998 [INFO][6576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:12.999 [INFO][6576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.004060 containerd[2108]: 2025-09-13 00:08:13.001 [INFO][6569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.004060 containerd[2108]: time="2025-09-13T00:08:13.003932905Z" level=info msg="TearDown network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" successfully" Sep 13 00:08:13.004060 containerd[2108]: time="2025-09-13T00:08:13.003958893Z" level=info msg="StopPodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" returns successfully" Sep 13 00:08:13.005614 containerd[2108]: time="2025-09-13T00:08:13.004461305Z" level=info msg="RemovePodSandbox for \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" Sep 13 00:08:13.005614 containerd[2108]: time="2025-09-13T00:08:13.004490557Z" level=info msg="Forcibly stopping sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\"" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.049 [WARNING][6590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"26c5671e-e9f7-4e86-8b88-1ceb37855ab1", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"efbe18063ca1bdd288dcda179ea5a675592e32dbc6b8b1488881d39a2ead2849", Pod:"calico-apiserver-8986d45d5-6tw7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali409a0dc050d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.049 [INFO][6590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.049 [INFO][6590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" iface="eth0" netns="" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.049 [INFO][6590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.049 [INFO][6590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.079 [INFO][6597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.079 [INFO][6597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.079 [INFO][6597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.086 [WARNING][6597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.086 [INFO][6597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" HandleID="k8s-pod-network.5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--6tw7s-eth0" Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.088 [INFO][6597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.092698 containerd[2108]: 2025-09-13 00:08:13.090 [INFO][6590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906" Sep 13 00:08:13.094128 containerd[2108]: time="2025-09-13T00:08:13.092818915Z" level=info msg="TearDown network for sandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" successfully" Sep 13 00:08:13.102251 containerd[2108]: time="2025-09-13T00:08:13.102201713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:13.103101 containerd[2108]: time="2025-09-13T00:08:13.102289588Z" level=info msg="RemovePodSandbox \"5bcf340ca11bcae6012eeed84882305841e6458d21a8571dc841ed457e99d906\" returns successfully" Sep 13 00:08:13.106681 containerd[2108]: time="2025-09-13T00:08:13.106635913Z" level=info msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.143 [WARNING][6611] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4473fa4-a590-42c7-aa32-c3ce2e18df44", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a", Pod:"calico-apiserver-8986d45d5-vzq9f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06e232f7846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.143 [INFO][6611] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.143 [INFO][6611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" iface="eth0" netns="" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.143 [INFO][6611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.143 [INFO][6611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.169 [INFO][6618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.169 [INFO][6618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.169 [INFO][6618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.176 [WARNING][6618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.176 [INFO][6618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.178 [INFO][6618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.182205 containerd[2108]: 2025-09-13 00:08:13.180 [INFO][6611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.183901 containerd[2108]: time="2025-09-13T00:08:13.182243943Z" level=info msg="TearDown network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" successfully" Sep 13 00:08:13.183901 containerd[2108]: time="2025-09-13T00:08:13.182280074Z" level=info msg="StopPodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" returns successfully" Sep 13 00:08:13.183901 containerd[2108]: time="2025-09-13T00:08:13.182859963Z" level=info msg="RemovePodSandbox for \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" Sep 13 00:08:13.183901 containerd[2108]: time="2025-09-13T00:08:13.182900120Z" level=info msg="Forcibly stopping sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\"" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.233 [WARNING][6632] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0", GenerateName:"calico-apiserver-8986d45d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4473fa4-a590-42c7-aa32-c3ce2e18df44", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8986d45d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"d066b54c2396fb551291f6ffd379de4b32d263672309a7b5a1d007c42da0bf2a", Pod:"calico-apiserver-8986d45d5-vzq9f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06e232f7846", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.233 [INFO][6632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.233 [INFO][6632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" iface="eth0" netns="" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.234 [INFO][6632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.234 [INFO][6632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.267 [INFO][6639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.267 [INFO][6639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.267 [INFO][6639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.276 [WARNING][6639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.276 [INFO][6639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" HandleID="k8s-pod-network.968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Workload="ip--172--31--25--42-k8s-calico--apiserver--8986d45d5--vzq9f-eth0" Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.278 [INFO][6639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.324222 containerd[2108]: 2025-09-13 00:08:13.281 [INFO][6632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7" Sep 13 00:08:13.325297 containerd[2108]: time="2025-09-13T00:08:13.324416541Z" level=info msg="TearDown network for sandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" successfully" Sep 13 00:08:13.333143 containerd[2108]: time="2025-09-13T00:08:13.333104406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:13.333468 containerd[2108]: time="2025-09-13T00:08:13.333356119Z" level=info msg="RemovePodSandbox \"968881f8be863dab9a6c999f407e388ee0390e4e0e0a7aafda5cd89fec8d50a7\" returns successfully" Sep 13 00:08:13.337658 containerd[2108]: time="2025-09-13T00:08:13.337586853Z" level=info msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.377 [WARNING][6653] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.377 [INFO][6653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.377 [INFO][6653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" iface="eth0" netns="" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.377 [INFO][6653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.377 [INFO][6653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.408 [INFO][6660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.408 [INFO][6660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.408 [INFO][6660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.417 [WARNING][6660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.417 [INFO][6660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.419 [INFO][6660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.425750 containerd[2108]: 2025-09-13 00:08:13.423 [INFO][6653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.425750 containerd[2108]: time="2025-09-13T00:08:13.425617981Z" level=info msg="TearDown network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" successfully" Sep 13 00:08:13.425750 containerd[2108]: time="2025-09-13T00:08:13.425640797Z" level=info msg="StopPodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" returns successfully" Sep 13 00:08:13.426652 containerd[2108]: time="2025-09-13T00:08:13.426613216Z" level=info msg="RemovePodSandbox for \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" Sep 13 00:08:13.426652 containerd[2108]: time="2025-09-13T00:08:13.426649253Z" level=info msg="Forcibly stopping sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\"" Sep 13 00:08:13.457966 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:13.457975 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:13.460722 systemd-journald[1580]: Under memory pressure, flushing caches. 
Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.475 [WARNING][6674] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" WorkloadEndpoint="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.475 [INFO][6674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.476 [INFO][6674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" iface="eth0" netns="" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.476 [INFO][6674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.476 [INFO][6674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.498 [INFO][6681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.499 [INFO][6681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.499 [INFO][6681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.505 [WARNING][6681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.505 [INFO][6681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" HandleID="k8s-pod-network.e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Workload="ip--172--31--25--42-k8s-whisker--94676679b--f8tw9-eth0" Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.507 [INFO][6681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.511454 containerd[2108]: 2025-09-13 00:08:13.509 [INFO][6674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6" Sep 13 00:08:13.512899 containerd[2108]: time="2025-09-13T00:08:13.511478163Z" level=info msg="TearDown network for sandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" successfully" Sep 13 00:08:13.534519 containerd[2108]: time="2025-09-13T00:08:13.534459122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
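The records from 00:08:13 are the kubelet's StopPodSandbox/RemovePodSandbox housekeeping replayed through the Calico CNI plugin, and every sandbox gets the same teardown shape: clean up the netns, release the IP address(es), take and drop the host-wide IPAM lock, then report "Teardown processing complete." The WARNING "Asked to release address but it doesn't exist. Ignoring" only marks a repeated, idempotent teardown of an already-released sandbox. A minimal sketch for checking a saved journal dump for that shape, assuming one journal record per line; the regexes below are inferred from the line format shown here, not from any Calico tooling:

    import re, sys
    from collections import defaultdict

    # e.g. [INFO][6590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5bcf..."
    EVENT = re.compile(r'\[(INFO|WARNING)\]\[\d+\] \S+ \d+: (.+)')
    CID = re.compile(r'ContainerID="([0-9a-f]{64})"')
    PHASES = ("Cleaning up netns", "Releasing IP address(es)",
              "Teardown processing complete")

    msgs = defaultdict(list)
    for line in open(sys.argv[1]):
        ev, cid = EVENT.search(line), CID.search(line)
        if ev and cid:
            msgs[cid.group(1)].append(ev.group(2))

    for cid, seen in msgs.items():
        done = all(any(p in m for m in seen) for p in PHASES)
        rerun = any("Asked to release address but it doesn't exist" in m for m in seen)
        print(cid[:12], "teardown complete" if done else "teardown incomplete",
              "(idempotent re-run)" if rerun else "")

Run over this section it would report sandboxes 5bcf340ca11b, 968881f8be86, e246ea39be71 and a4d306d0d527 as complete, each flagged as a re-run.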
Sep 13 00:08:13.535376 containerd[2108]: time="2025-09-13T00:08:13.534542111Z" level=info msg="RemovePodSandbox \"e246ea39be71fbf4cbc6789c52b18fed98d5bfc8f076dc4d16559f7c059963e6\" returns successfully" Sep 13 00:08:13.535376 containerd[2108]: time="2025-09-13T00:08:13.535081040Z" level=info msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.577 [WARNING][6695] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0", GenerateName:"calico-kube-controllers-6b685ff94f-", Namespace:"calico-system", SelfLink:"", UID:"58082458-d541-4695-8472-c49eaa5420d4", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b685ff94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c", Pod:"calico-kube-controllers-6b685ff94f-rk6kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1aa6e8ea83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.578 [INFO][6695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.578 [INFO][6695] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" iface="eth0" netns="" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.578 [INFO][6695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.578 [INFO][6695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.606 [INFO][6703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.606 [INFO][6703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.606 [INFO][6703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.613 [WARNING][6703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.613 [INFO][6703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.615 [INFO][6703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.619125 containerd[2108]: 2025-09-13 00:08:13.617 [INFO][6695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.619922 containerd[2108]: time="2025-09-13T00:08:13.619777402Z" level=info msg="TearDown network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" successfully" Sep 13 00:08:13.619922 containerd[2108]: time="2025-09-13T00:08:13.619809476Z" level=info msg="StopPodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" returns successfully" Sep 13 00:08:13.620332 containerd[2108]: time="2025-09-13T00:08:13.620292233Z" level=info msg="RemovePodSandbox for \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" Sep 13 00:08:13.620417 containerd[2108]: time="2025-09-13T00:08:13.620330847Z" level=info msg="Forcibly stopping sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\"" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.664 [WARNING][6717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0", GenerateName:"calico-kube-controllers-6b685ff94f-", Namespace:"calico-system", SelfLink:"", UID:"58082458-d541-4695-8472-c49eaa5420d4", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b685ff94f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-42", ContainerID:"471da687d32098d525ca241bc8d7c226eda05dfdbf9b2b3a94ea8c7ddbe8a99c", Pod:"calico-kube-controllers-6b685ff94f-rk6kg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia1aa6e8ea83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.665 [INFO][6717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.665 [INFO][6717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" iface="eth0" netns="" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.665 [INFO][6717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.665 [INFO][6717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.689 [INFO][6724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.690 [INFO][6724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.690 [INFO][6724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.696 [WARNING][6724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.696 [INFO][6724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" HandleID="k8s-pod-network.a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Workload="ip--172--31--25--42-k8s-calico--kube--controllers--6b685ff94f--rk6kg-eth0" Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.697 [INFO][6724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.701997 containerd[2108]: 2025-09-13 00:08:13.699 [INFO][6717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1" Sep 13 00:08:13.701997 containerd[2108]: time="2025-09-13T00:08:13.701842893Z" level=info msg="TearDown network for sandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" successfully" Sep 13 00:08:13.710064 containerd[2108]: time="2025-09-13T00:08:13.709896714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:13.710064 containerd[2108]: time="2025-09-13T00:08:13.709988746Z" level=info msg="RemovePodSandbox \"a4d306d0d5278085a70c37550945800535e931a575f7e5152da9a9c96415fde1\" returns successfully" Sep 13 00:08:13.913073 systemd[1]: Started sshd@12-172.31.25.42:22-139.178.89.65:37040.service - OpenSSH per-connection server daemon (139.178.89.65:37040). Sep 13 00:08:14.178425 sshd[6735]: Accepted publickey for core from 139.178.89.65 port 37040 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:14.181160 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:14.186555 systemd-logind[2075]: New session 13 of user core. Sep 13 00:08:14.191280 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:08:15.376617 sshd[6735]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:15.381921 systemd-logind[2075]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:15.382060 systemd[1]: sshd@12-172.31.25.42:22-139.178.89.65:37040.service: Deactivated successfully. Sep 13 00:08:15.389536 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:15.390859 systemd-logind[2075]: Removed session 13. Sep 13 00:08:15.505999 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:15.506006 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:15.507730 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:20.390972 systemd[1]: run-containerd-runc-k8s.io-dfb40a8974a4766c4a55689782781538bc0ec712eb656968408df8e1f9698a1d-runc.Z4hQ8c.mount: Deactivated successfully. Sep 13 00:08:20.410117 systemd[1]: Started sshd@13-172.31.25.42:22-139.178.89.65:51892.service - OpenSSH per-connection server daemon (139.178.89.65:51892). 
Sep 13 00:08:20.648809 sshd[6789]: Accepted publickey for core from 139.178.89.65 port 51892 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:20.652332 sshd[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:20.657782 systemd-logind[2075]: New session 14 of user core. Sep 13 00:08:20.666039 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:08:21.271043 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:21.266661 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:21.266696 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:21.469668 sshd[6789]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:21.472699 systemd[1]: sshd@13-172.31.25.42:22-139.178.89.65:51892.service: Deactivated successfully. Sep 13 00:08:21.476302 systemd-logind[2075]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:21.477382 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:21.479691 systemd-logind[2075]: Removed session 14. Sep 13 00:08:22.181218 kubelet[3333]: I0913 00:08:22.181168 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:25.432521 kubelet[3333]: I0913 00:08:25.432266 3333 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:26.498130 systemd[1]: Started sshd@14-172.31.25.42:22-139.178.89.65:51896.service - OpenSSH per-connection server daemon (139.178.89.65:51896). Sep 13 00:08:26.718410 sshd[6831]: Accepted publickey for core from 139.178.89.65 port 51896 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:26.720860 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:26.726878 systemd-logind[2075]: New session 15 of user core. Sep 13 00:08:26.731323 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:08:27.281816 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:27.284006 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:27.281843 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:27.446273 sshd[6831]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:27.452101 systemd[1]: sshd@14-172.31.25.42:22-139.178.89.65:51896.service: Deactivated successfully. Sep 13 00:08:27.458847 systemd-logind[2075]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:08:27.459400 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:08:27.461460 systemd-logind[2075]: Removed session 15. Sep 13 00:08:27.474960 systemd[1]: Started sshd@15-172.31.25.42:22-139.178.89.65:51904.service - OpenSSH per-connection server daemon (139.178.89.65:51904). Sep 13 00:08:27.632213 sshd[6845]: Accepted publickey for core from 139.178.89.65 port 51904 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:27.636286 sshd[6845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:27.647460 systemd-logind[2075]: New session 16 of user core. Sep 13 00:08:27.653997 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:08:28.447524 sshd[6845]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:28.458847 systemd[1]: sshd@15-172.31.25.42:22-139.178.89.65:51904.service: Deactivated successfully. Sep 13 00:08:28.461808 systemd-logind[2075]: Session 16 logged out. Waiting for processes to exit. 
Sep 13 00:08:28.461849 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:08:28.463509 systemd-logind[2075]: Removed session 16. Sep 13 00:08:28.476220 systemd[1]: Started sshd@16-172.31.25.42:22-139.178.89.65:51912.service - OpenSSH per-connection server daemon (139.178.89.65:51912). Sep 13 00:08:28.665228 sshd[6863]: Accepted publickey for core from 139.178.89.65 port 51912 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:28.666912 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:28.674438 systemd-logind[2075]: New session 17 of user core. Sep 13 00:08:28.683153 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:08:29.331428 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:29.333238 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:29.331438 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:31.383186 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:31.377816 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:31.377827 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:33.472933 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:33.481453 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:33.481523 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:33.930323 sshd[6863]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:33.975511 systemd[1]: Started sshd@17-172.31.25.42:22-139.178.89.65:37788.service - OpenSSH per-connection server daemon (139.178.89.65:37788). Sep 13 00:08:34.038192 systemd[1]: sshd@16-172.31.25.42:22-139.178.89.65:51912.service: Deactivated successfully. Sep 13 00:08:34.059693 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:08:34.062377 systemd-logind[2075]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:08:34.078210 systemd-logind[2075]: Removed session 17. Sep 13 00:08:34.713325 sshd[6877]: Accepted publickey for core from 139.178.89.65 port 37788 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:34.724094 sshd[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:34.743912 systemd-logind[2075]: New session 18 of user core. Sep 13 00:08:34.752098 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:08:35.473836 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:35.529339 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:35.473845 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:37.560060 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:37.559911 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:37.559927 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:38.117961 kubelet[3333]: E0913 00:08:38.086358 3333 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.199s" Sep 13 00:08:39.610851 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:39.598763 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:39.601787 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:41.639759 systemd-journald[1580]: Under memory pressure, flushing caches. 
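By 00:08:29 the "Under memory pressure, flushing caches" pairs from systemd-resolved and systemd-journald have settled into a roughly two-second cadence (00:08:29, :31, :33, :35, :37, :39, :41), which reads as sustained memory pressure rather than a one-off. A sketch measuring that cadence from a dump, under the same per-line and year assumptions as above:

    import re, sys
    from datetime import datetime

    FLUSH = re.compile(r'([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d+) '
                       r'systemd-resolved\[\d+\]: Under memory pressure')
    times = []
    for line in open(sys.argv[1]):
        m = FLUSH.search(line)
        if m:
            times.append(datetime.strptime("2025 " + m.group(1),
                                           "%Y %b %d %H:%M:%S.%f"))
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if gaps:
        print(f"{len(times)} flushes, gaps {min(gaps):.1f}s to {max(gaps):.1f}s")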
Sep 13 00:08:41.618872 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:41.618895 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:43.706462 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:43.704182 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:43.704201 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:45.718240 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:45.715780 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:45.715790 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:46.032804 kubelet[3333]: E0913 00:08:46.021737 3333 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.454s" Sep 13 00:08:47.359105 sshd[6877]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:47.429227 systemd[1]: Started sshd@18-172.31.25.42:22-139.178.89.65:58846.service - OpenSSH per-connection server daemon (139.178.89.65:58846). Sep 13 00:08:47.449175 systemd[1]: sshd@17-172.31.25.42:22-139.178.89.65:37788.service: Deactivated successfully. Sep 13 00:08:47.473054 systemd-logind[2075]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:08:47.475051 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:08:47.482745 systemd-logind[2075]: Removed session 18. Sep 13 00:08:47.769826 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:47.769255 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:47.769268 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:47.918728 sshd[6959]: Accepted publickey for core from 139.178.89.65 port 58846 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:47.921004 sshd[6959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:47.965988 systemd-logind[2075]: New session 19 of user core. Sep 13 00:08:47.973584 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:08:48.953463 systemd[1]: run-containerd-runc-k8s.io-dfb40a8974a4766c4a55689782781538bc0ec712eb656968408df8e1f9698a1d-runc.QFM0Wv.mount: Deactivated successfully. Sep 13 00:08:49.811767 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:49.809811 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:49.809848 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:50.733906 sshd[6959]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:50.749955 systemd[1]: sshd@18-172.31.25.42:22-139.178.89.65:58846.service: Deactivated successfully. Sep 13 00:08:50.761333 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:08:50.764780 systemd-logind[2075]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:08:50.775848 systemd-logind[2075]: Removed session 19. Sep 13 00:08:51.861904 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:51.857962 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:51.857972 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:55.773323 systemd[1]: Started sshd@19-172.31.25.42:22-139.178.89.65:45190.service - OpenSSH per-connection server daemon (139.178.89.65:45190). Sep 13 00:08:55.806390 systemd[1]: run-containerd-runc-k8s.io-adcbf58e85eae02a58ff62fc323350e1a086f6770563cc66ef9e195bf585120d-runc.iDIogI.mount: Deactivated successfully. 
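The kubelet's own numbers corroborate the pressure: housekeeping, budgeted at 1s, took 2.199s at 00:08:38 and 6.454s at 00:08:46, and the lease-renewal failures further below are the downstream symptom. Pulling the overruns out of a dump, with the pattern taken verbatim from the two records above:

    import re, sys

    HK = re.compile(r'Housekeeping took longer than expected.*'
                    r'expected="([\d.]+)s" actual="([\d.]+)s"')
    for line in open(sys.argv[1]):
        m = HK.search(line)
        if m:
            print(f"housekeeping ran {m.group(2)}s against a {m.group(1)}s budget")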
Sep 13 00:08:56.051799 sshd[6998]: Accepted publickey for core from 139.178.89.65 port 45190 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:56.053404 sshd[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:56.062272 systemd-logind[2075]: New session 20 of user core. Sep 13 00:08:56.068438 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:08:57.302094 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:08:57.301596 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:08:57.301606 systemd-resolved[1989]: Flushed all caches. Sep 13 00:08:57.857529 sshd[6998]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.864422 systemd-logind[2075]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:08:57.870999 systemd[1]: sshd@19-172.31.25.42:22-139.178.89.65:45190.service: Deactivated successfully. Sep 13 00:08:57.880987 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:08:57.883046 systemd-logind[2075]: Removed session 20. Sep 13 00:09:02.899185 systemd[1]: Started sshd@20-172.31.25.42:22-139.178.89.65:54480.service - OpenSSH per-connection server daemon (139.178.89.65:54480). Sep 13 00:09:03.236422 sshd[7047]: Accepted publickey for core from 139.178.89.65 port 54480 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:09:03.238561 sshd[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:03.252174 systemd-logind[2075]: New session 21 of user core. Sep 13 00:09:03.261074 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:09:03.321888 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:09:03.314120 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:09:03.314167 systemd-resolved[1989]: Flushed all caches. Sep 13 00:09:04.909123 sshd[7047]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:04.915138 systemd-logind[2075]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:09:04.916482 systemd[1]: sshd@20-172.31.25.42:22-139.178.89.65:54480.service: Deactivated successfully. Sep 13 00:09:04.935438 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:09:04.937287 systemd-logind[2075]: Removed session 21. Sep 13 00:09:09.329820 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:09:09.334156 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:09:09.329831 systemd-resolved[1989]: Flushed all caches. Sep 13 00:09:09.951009 systemd[1]: Started sshd@21-172.31.25.42:22-139.178.89.65:52860.service - OpenSSH per-connection server daemon (139.178.89.65:52860). Sep 13 00:09:10.348799 sshd[7093]: Accepted publickey for core from 139.178.89.65 port 52860 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:09:10.354564 sshd[7093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:10.373581 systemd-logind[2075]: New session 22 of user core. Sep 13 00:09:10.382172 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:09:11.380853 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:09:11.377977 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:09:11.378016 systemd-resolved[1989]: Flushed all caches. 
Sep 13 00:09:12.001867 sshd[7093]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:12.013241 systemd[1]: sshd@21-172.31.25.42:22-139.178.89.65:52860.service: Deactivated successfully. Sep 13 00:09:12.030566 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:09:12.033946 systemd-logind[2075]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:09:12.042596 systemd-logind[2075]: Removed session 22. Sep 13 00:09:25.663748 kubelet[3333]: E0913 00:09:25.659932 3333 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:09:25.772247 systemd[1]: run-containerd-runc-k8s.io-adcbf58e85eae02a58ff62fc323350e1a086f6770563cc66ef9e195bf585120d-runc.9nvPhr.mount: Deactivated successfully. Sep 13 00:09:25.929700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c-rootfs.mount: Deactivated successfully. Sep 13 00:09:25.951834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e-rootfs.mount: Deactivated successfully. Sep 13 00:09:25.982349 containerd[2108]: time="2025-09-13T00:09:25.956959762Z" level=info msg="shim disconnected" id=3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c namespace=k8s.io Sep 13 00:09:25.986501 containerd[2108]: time="2025-09-13T00:09:25.982361086Z" level=warning msg="cleaning up after shim disconnected" id=3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c namespace=k8s.io Sep 13 00:09:25.986501 containerd[2108]: time="2025-09-13T00:09:25.982379139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:25.986501 containerd[2108]: time="2025-09-13T00:09:25.957851633Z" level=info msg="shim disconnected" id=249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e namespace=k8s.io Sep 13 00:09:25.986501 containerd[2108]: time="2025-09-13T00:09:25.982430892Z" level=warning msg="cleaning up after shim disconnected" id=249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e namespace=k8s.io Sep 13 00:09:25.986501 containerd[2108]: time="2025-09-13T00:09:25.982440554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:26.065549 containerd[2108]: time="2025-09-13T00:09:26.065466377Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:26.066386 containerd[2108]: time="2025-09-13T00:09:26.066345905Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:26.223766 kubelet[3333]: I0913 00:09:26.223619 3333 scope.go:117] "RemoveContainer" containerID="249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e" Sep 13 00:09:26.230053 kubelet[3333]: I0913 00:09:26.230016 3333 scope.go:117] "RemoveContainer" containerID="3b263db15b3fe73d41ce1d9b1fc1f1ea5a02128dcc7b759256622b5584dce00c" Sep 13 00:09:26.314316 containerd[2108]: time="2025-09-13T00:09:26.314146548Z" level=info msg="CreateContainer within sandbox 
\"11476d4c949e2da569234c475506707b7011b90d15878823a1547272b2fd026a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 13 00:09:26.314533 containerd[2108]: time="2025-09-13T00:09:26.314335219Z" level=info msg="CreateContainer within sandbox \"79196e30514967e6405eb826a62d2ac904c032b9f809e4994359d79c44e2dbf5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:09:26.462198 containerd[2108]: time="2025-09-13T00:09:26.461335901Z" level=info msg="CreateContainer within sandbox \"11476d4c949e2da569234c475506707b7011b90d15878823a1547272b2fd026a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155\"" Sep 13 00:09:26.462472 containerd[2108]: time="2025-09-13T00:09:26.462442088Z" level=info msg="CreateContainer within sandbox \"79196e30514967e6405eb826a62d2ac904c032b9f809e4994359d79c44e2dbf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e120cfa4408b122e2e21a8b439bc2088115251b73a7b5df3bbe140c8ec2ecb9d\"" Sep 13 00:09:26.465442 containerd[2108]: time="2025-09-13T00:09:26.465402369Z" level=info msg="StartContainer for \"3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155\"" Sep 13 00:09:26.466392 containerd[2108]: time="2025-09-13T00:09:26.466366545Z" level=info msg="StartContainer for \"e120cfa4408b122e2e21a8b439bc2088115251b73a7b5df3bbe140c8ec2ecb9d\"" Sep 13 00:09:26.614939 containerd[2108]: time="2025-09-13T00:09:26.613046012Z" level=info msg="StartContainer for \"e120cfa4408b122e2e21a8b439bc2088115251b73a7b5df3bbe140c8ec2ecb9d\" returns successfully" Sep 13 00:09:26.630007 containerd[2108]: time="2025-09-13T00:09:26.629960115Z" level=info msg="StartContainer for \"3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155\" returns successfully" Sep 13 00:09:26.759599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577673372.mount: Deactivated successfully. Sep 13 00:09:26.759775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161105278.mount: Deactivated successfully. Sep 13 00:09:29.298005 systemd-resolved[1989]: Under memory pressure, flushing caches. Sep 13 00:09:29.304124 systemd-journald[1580]: Under memory pressure, flushing caches. Sep 13 00:09:29.298041 systemd-resolved[1989]: Flushed all caches. Sep 13 00:09:30.732724 containerd[2108]: time="2025-09-13T00:09:30.732089862Z" level=info msg="shim disconnected" id=bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5 namespace=k8s.io Sep 13 00:09:30.732724 containerd[2108]: time="2025-09-13T00:09:30.732171704Z" level=warning msg="cleaning up after shim disconnected" id=bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5 namespace=k8s.io Sep 13 00:09:30.732724 containerd[2108]: time="2025-09-13T00:09:30.732186044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:30.738681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5-rootfs.mount: Deactivated successfully. 
Sep 13 00:09:31.228997 kubelet[3333]: I0913 00:09:31.228954 3333 scope.go:117] "RemoveContainer" containerID="bea4ea23499181e46ce8d41818012b766244527a2e063f8430ebe9d2a5aac3d5" Sep 13 00:09:31.232359 containerd[2108]: time="2025-09-13T00:09:31.232317879Z" level=info msg="CreateContainer within sandbox \"6eb0194975b0b27563a89499031af33e61b7146008a02159a9a4c6427c4bbb1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:09:31.259741 containerd[2108]: time="2025-09-13T00:09:31.259652581Z" level=info msg="CreateContainer within sandbox \"6eb0194975b0b27563a89499031af33e61b7146008a02159a9a4c6427c4bbb1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f5dad8739cdc666a86b4cc42c0698a5ddb5385ac513a8c027f913ee0427ed6a8\"" Sep 13 00:09:31.260352 containerd[2108]: time="2025-09-13T00:09:31.260322112Z" level=info msg="StartContainer for \"f5dad8739cdc666a86b4cc42c0698a5ddb5385ac513a8c027f913ee0427ed6a8\"" Sep 13 00:09:31.353088 containerd[2108]: time="2025-09-13T00:09:31.353008276Z" level=info msg="StartContainer for \"f5dad8739cdc666a86b4cc42c0698a5ddb5385ac513a8c027f913ee0427ed6a8\" returns successfully" Sep 13 00:09:35.682478 kubelet[3333]: E0913 00:09:35.682335 3333 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-42?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 13 00:09:39.247189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155-rootfs.mount: Deactivated successfully. Sep 13 00:09:39.249896 containerd[2108]: time="2025-09-13T00:09:39.249834186Z" level=info msg="shim disconnected" id=3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155 namespace=k8s.io Sep 13 00:09:39.249896 containerd[2108]: time="2025-09-13T00:09:39.249894887Z" level=warning msg="cleaning up after shim disconnected" id=3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155 namespace=k8s.io Sep 13 00:09:39.250312 containerd[2108]: time="2025-09-13T00:09:39.249903179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:09:39.268729 containerd[2108]: time="2025-09-13T00:09:39.268131097Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:09:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:09:40.276618 kubelet[3333]: I0913 00:09:40.276547 3333 scope.go:117] "RemoveContainer" containerID="249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e" Sep 13 00:09:40.277768 kubelet[3333]: I0913 00:09:40.276975 3333 scope.go:117] "RemoveContainer" containerID="3ae54204f15a0e850df2bdaf763863236cd9f1da5e67a666bdbf75747890a155" Sep 13 00:09:40.318744 kubelet[3333]: E0913 00:09:40.295273 3333 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-cnj9l_tigera-operator(8b15c8f7-a3b3-439f-987e-21e1c01c1dd8)\"" pod="tigera-operator/tigera-operator-58fc44c59b-cnj9l" podUID="8b15c8f7-a3b3-439f-987e-21e1c01c1dd8" Sep 13 00:09:40.432912 containerd[2108]: time="2025-09-13T00:09:40.432843133Z" level=info msg="RemoveContainer for \"249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e\"" Sep 13 00:09:40.457053 
containerd[2108]: time="2025-09-13T00:09:40.456997221Z" level=info msg="RemoveContainer for \"249d504ac1a07a2226cd79ecd199b4ccf16f69c7b91115c91130cbd36bc3321e\" returns successfully" Sep 13 00:09:45.709272 kubelet[3333]: E0913 00:09:45.705771 3333 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-25-42)"