Sep 13 00:05:13.904191 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:05:13.904231 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:13.904249 kernel: BIOS-provided physical RAM map:
Sep 13 00:05:13.904260 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:05:13.904270 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:05:13.904281 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 13 00:05:13.904294 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 13 00:05:13.904305 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:05:13.904317 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:05:13.904331 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:05:13.904343 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:05:13.904355 kernel: NX (Execute Disable) protection: active
Sep 13 00:05:13.904366 kernel: APIC: Static calls initialized
Sep 13 00:05:13.904378 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:05:13.904393 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 13 00:05:13.904409 kernel: SMBIOS 2.7 present.
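The "Command line:" entry above is a space-separated list of key=value pairs and bare flags. A minimal Python sketch (illustration only, not part of the log) of splitting such a line, e.g. when read back from /proc/cmdline:

# Sketch: split a kernel command line into key/value pairs; bare words
# (no '=') become boolean flags. Repeated keys such as console= keep
# only the last value in this simple dict-based version.
def parse_cmdline(cmdline: str) -> dict:
    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else True
    return args

line = "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected"
print(parse_cmdline(line)["root"])                # LABEL=ROOT
print(parse_cmdline(line)["flatcar.first_boot"])  # detected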
Sep 13 00:05:13.904422 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:05:13.904435 kernel: Hypervisor detected: KVM
Sep 13 00:05:13.904448 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:05:13.904461 kernel: kvm-clock: using sched offset of 4127922718 cycles
Sep 13 00:05:13.904474 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:05:13.904505 kernel: tsc: Detected 2499.996 MHz processor
Sep 13 00:05:13.904517 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:05:13.904530 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:05:13.905307 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:05:13.905329 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:05:13.905344 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:05:13.905359 kernel: Using GB pages for direct mapping
Sep 13 00:05:13.905373 kernel: Secure boot disabled
Sep 13 00:05:13.905386 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:05:13.905400 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:05:13.905412 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:05:13.905425 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:05:13.905439 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:05:13.905456 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:05:13.905470 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:05:13.905482 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:05:13.905529 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:05:13.905542 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:05:13.905555 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:05:13.905575 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:05:13.905592 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:05:13.905607 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:05:13.905622 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:05:13.905636 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:05:13.905651 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:05:13.905665 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:05:13.905684 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:05:13.905698 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:05:13.905713 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:05:13.905727 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:05:13.905742 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:05:13.905757 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:05:13.905772 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:05:13.905787 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:05:13.905800 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:05:13.905815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:05:13.905834 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:05:13.905847 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:05:13.905861 kernel: Zone ranges:
Sep 13 00:05:13.905876 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:05:13.905893 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:05:13.905907 kernel: Normal empty
Sep 13 00:05:13.905921 kernel: Movable zone start for each node
Sep 13 00:05:13.905935 kernel: Early memory node ranges
Sep 13 00:05:13.905950 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:05:13.905969 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:05:13.905987 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:05:13.906002 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:05:13.906017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:05:13.906034 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:05:13.906050 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:05:13.906066 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:05:13.906082 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:05:13.906100 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:05:13.906121 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:05:13.906136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:05:13.906151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:05:13.906167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:05:13.906183 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:05:13.906197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:05:13.906214 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:05:13.906231 kernel: TSC deadline timer available
Sep 13 00:05:13.906249 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:05:13.906270 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:05:13.906285 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:05:13.906302 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:05:13.906317 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:05:13.906332 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:05:13.906346 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:05:13.906362 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:05:13.906377 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:05:13.906391 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:05:13.906404 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:05:13.906425 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:13.906440 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:05:13.906454 kernel: random: crng init done
Sep 13 00:05:13.906469 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:05:13.906485 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:05:13.907590 kernel: Fallback order for Node 0: 0
Sep 13 00:05:13.907608 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:05:13.907629 kernel: Policy zone: DMA32
Sep 13 00:05:13.907645 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:05:13.907661 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162936K reserved, 0K cma-reserved)
Sep 13 00:05:13.907677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:05:13.907692 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:05:13.907708 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:05:13.907723 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:05:13.907739 kernel: Dynamic Preempt: voluntary
Sep 13 00:05:13.907754 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:05:13.907775 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:05:13.907790 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:05:13.907806 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:05:13.907822 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:05:13.907838 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:05:13.907854 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:05:13.907869 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:05:13.907885 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:05:13.907925 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:05:13.907939 kernel: Console: colour dummy device 80x25
Sep 13 00:05:13.907953 kernel: printk: console [tty0] enabled
Sep 13 00:05:13.907966 kernel: printk: console [ttyS0] enabled
Sep 13 00:05:13.908061 kernel: ACPI: Core revision 20230628
Sep 13 00:05:13.908078 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:05:13.908095 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:05:13.908112 kernel: x2apic enabled
Sep 13 00:05:13.908129 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:05:13.908146 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 13 00:05:13.908166 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep 13 00:05:13.908183 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:05:13.908199 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:05:13.908214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:05:13.908230 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:05:13.908246 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:05:13.908262 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:05:13.908279 kernel: RETBleed: Vulnerable
Sep 13 00:05:13.908295 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:05:13.908315 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:05:13.908331 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:05:13.908347 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:05:13.908364 kernel: active return thunk: its_return_thunk
Sep 13 00:05:13.908380 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:05:13.908397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:05:13.908413 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:05:13.908430 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:05:13.908446 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:05:13.908463 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:05:13.908479 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:05:13.908509 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:05:13.908524 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:05:13.908541 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:05:13.908558 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:05:13.908574 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:05:13.908590 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:05:13.908606 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:05:13.908623 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:05:13.908639 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:05:13.908655 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:05:13.908672 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:05:13.908688 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:05:13.908708 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:05:13.908724 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:05:13.908740 kernel: landlock: Up and running.
Sep 13 00:05:13.908756 kernel: SELinux: Initializing.
Sep 13 00:05:13.908773 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:13.908789 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:05:13.908806 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:05:13.908822 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
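The Spectre/RETBleed/MDS status lines above are also exported at runtime under /sys/devices/system/cpu/vulnerabilities, one file per issue. A small Python sketch (the sysfs layout is standard on modern kernels) that prints the same verdicts the boot log reports:

# Sketch: read the kernel's per-vulnerability mitigation status, which
# mirrors boot-time lines such as "RETBleed: Vulnerable" seen above.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")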
Sep 13 00:05:13.908839 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:13.908856 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:05:13.908876 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:05:13.908893 kernel: signal: max sigframe size: 3632
Sep 13 00:05:13.908909 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:05:13.908927 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:05:13.908943 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:05:13.908960 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:05:13.908977 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:05:13.908993 kernel: .... node #0, CPUs: #1
Sep 13 00:05:13.909011 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:05:13.909032 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:05:13.909048 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:05:13.909064 kernel: smpboot: Max logical packages: 1
Sep 13 00:05:13.909081 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep 13 00:05:13.909097 kernel: devtmpfs: initialized
Sep 13 00:05:13.909114 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:05:13.909130 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:05:13.909147 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:05:13.909167 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:05:13.909183 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:05:13.909200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:05:13.909217 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:05:13.909233 kernel: audit: type=2000 audit(1757721914.266:1): state=initialized audit_enabled=0 res=1
Sep 13 00:05:13.909250 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:05:13.909266 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:05:13.909282 kernel: cpuidle: using governor menu
Sep 13 00:05:13.909299 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:05:13.909318 kernel: dca service started, version 1.12.1
Sep 13 00:05:13.909335 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:05:13.909351 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
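The audit record above carries a raw Unix timestamp, audit(1757721914.266:1): epoch seconds plus a per-boot serial number. Converting it (shown only to make the audit clock comparable to the journal timestamps):

# Sketch: the number inside audit(...) is epoch seconds; ":1" is a serial.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1757721914.266, tz=timezone.utc))
# -> 2025-09-13 00:05:14.266000+00:00, consistent with the journal times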
Sep 13 00:05:13.909367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:05:13.909384 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:05:13.909400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:05:13.909416 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:05:13.909433 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:05:13.909449 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:05:13.909469 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:05:13.909486 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:05:13.909763 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:05:13.909781 kernel: ACPI: Interpreter enabled
Sep 13 00:05:13.909798 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:05:13.909815 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:05:13.909831 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:05:13.909847 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:05:13.909864 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:05:13.909881 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:05:13.910100 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:05:13.910252 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:05:13.910391 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:05:13.910412 kernel: acpiphp: Slot [3] registered
Sep 13 00:05:13.910429 kernel: acpiphp: Slot [4] registered
Sep 13 00:05:13.910445 kernel: acpiphp: Slot [5] registered
Sep 13 00:05:13.910462 kernel: acpiphp: Slot [6] registered
Sep 13 00:05:13.910482 kernel: acpiphp: Slot [7] registered
Sep 13 00:05:13.910517 kernel: acpiphp: Slot [8] registered
Sep 13 00:05:13.910534 kernel: acpiphp: Slot [9] registered
Sep 13 00:05:13.910551 kernel: acpiphp: Slot [10] registered
Sep 13 00:05:13.910567 kernel: acpiphp: Slot [11] registered
Sep 13 00:05:13.910582 kernel: acpiphp: Slot [12] registered
Sep 13 00:05:13.910598 kernel: acpiphp: Slot [13] registered
Sep 13 00:05:13.910614 kernel: acpiphp: Slot [14] registered
Sep 13 00:05:13.910629 kernel: acpiphp: Slot [15] registered
Sep 13 00:05:13.910649 kernel: acpiphp: Slot [16] registered
Sep 13 00:05:13.910665 kernel: acpiphp: Slot [17] registered
Sep 13 00:05:13.910679 kernel: acpiphp: Slot [18] registered
Sep 13 00:05:13.910696 kernel: acpiphp: Slot [19] registered
Sep 13 00:05:13.910713 kernel: acpiphp: Slot [20] registered
Sep 13 00:05:13.910729 kernel: acpiphp: Slot [21] registered
Sep 13 00:05:13.910746 kernel: acpiphp: Slot [22] registered
Sep 13 00:05:13.910762 kernel: acpiphp: Slot [23] registered
Sep 13 00:05:13.910779 kernel: acpiphp: Slot [24] registered
Sep 13 00:05:13.910796 kernel: acpiphp: Slot [25] registered
Sep 13 00:05:13.910815 kernel: acpiphp: Slot [26] registered
Sep 13 00:05:13.910830 kernel: acpiphp: Slot [27] registered
Sep 13 00:05:13.910843 kernel: acpiphp: Slot [28] registered
Sep 13 00:05:13.910859 kernel: acpiphp: Slot [29] registered
Sep 13 00:05:13.910877 kernel: acpiphp: Slot [30] registered
Sep 13 00:05:13.910893 kernel: acpiphp: Slot [31] registered
Sep 13 00:05:13.910909 kernel: PCI host bridge to bus 0000:00
Sep 13 00:05:13.911064 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:05:13.911199 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:05:13.911328 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:13.911452 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:05:13.911605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:05:13.911732 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:05:13.911912 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:05:13.912065 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:05:13.912220 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:05:13.912364 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:05:13.915355 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:05:13.915582 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:05:13.915729 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:05:13.915869 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:05:13.916017 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:05:13.916163 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:05:13.916310 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:05:13.916447 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:05:13.916635 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:05:13.916771 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:05:13.916908 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:05:13.917060 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:05:13.917198 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:05:13.917342 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:05:13.917482 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:05:13.917573 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:05:13.917590 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:05:13.917606 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:05:13.917622 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:05:13.917644 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:05:13.917660 kernel: iommu: Default domain type: Translated
Sep 13 00:05:13.917676 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:05:13.917692 kernel: efivars: Registered efivars operations
Sep 13 00:05:13.917709 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:05:13.917725 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:05:13.917741 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:05:13.917757 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:05:13.917898 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:05:13.918052 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:05:13.918189 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:05:13.918208 kernel: vgaarb: loaded
Sep 13 00:05:13.918223 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:05:13.918239 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:05:13.918252 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:05:13.918270 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:05:13.918286 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:05:13.918305 kernel: pnp: PnP ACPI init
Sep 13 00:05:13.918321 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:05:13.918336 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:05:13.918351 kernel: NET: Registered PF_INET protocol family
Sep 13 00:05:13.918365 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:05:13.918381 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:05:13.918397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:05:13.918414 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:05:13.918430 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:05:13.918450 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:05:13.918463 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:13.918478 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:05:13.918537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:05:13.918553 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:05:13.918699 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:05:13.918825 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:05:13.918944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:05:13.919063 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:05:13.919177 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:05:13.919315 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:05:13.919336 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:05:13.919352 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:05:13.919368 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 13 00:05:13.919384 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:05:13.919399 kernel: Initialise system trusted keyrings
Sep 13 00:05:13.919415 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:05:13.919434 kernel: Key type asymmetric registered
Sep 13 00:05:13.919449 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:05:13.919465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:05:13.919480 kernel: io scheduler mq-deadline registered
Sep 13 00:05:13.920168 kernel: io scheduler kyber registered
Sep 13 00:05:13.920188 kernel: io scheduler bfq registered
Sep 13 00:05:13.920206 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:05:13.920223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:05:13.920239 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:05:13.920261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:05:13.920278 kernel: i8042: Warning: Keylock active
Sep 13 00:05:13.920294 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:05:13.920310 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:05:13.920478 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:05:13.920973 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:05:13.921093 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:05:13 UTC (1757721913)
Sep 13 00:05:13.921207 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:05:13.921230 kernel: intel_pstate: CPU model not supported
Sep 13 00:05:13.921245 kernel: efifb: probing for efifb
Sep 13 00:05:13.921261 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 13 00:05:13.921276 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:05:13.921291 kernel: efifb: scrolling: redraw
Sep 13 00:05:13.921306 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:05:13.921321 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:05:13.921336 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:05:13.921351 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:05:13.921369 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:05:13.921384 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:05:13.921399 kernel: Segment Routing with IPv6
Sep 13 00:05:13.921414 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:05:13.921429 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:05:13.921445 kernel: Key type dns_resolver registered
Sep 13 00:05:13.921481 kernel: IPI shorthand broadcast: enabled
Sep 13 00:05:13.921520 kernel: sched_clock: Marking stable (484002969, 127021834)->(675015886, -63991083)
Sep 13 00:05:13.921536 kernel: registered taskstats version 1
Sep 13 00:05:13.921555 kernel: Loading compiled-in X.509 certificates
Sep 13 00:05:13.921571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:05:13.921586 kernel: Key type .fscrypt registered
Sep 13 00:05:13.921602 kernel: Key type fscrypt-provisioning registered
Sep 13 00:05:13.921617 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:05:13.921633 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:05:13.921649 kernel: ima: No architecture policies found
Sep 13 00:05:13.921665 kernel: clk: Disabling unused clocks
Sep 13 00:05:13.921684 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:05:13.921700 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:05:13.921714 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:05:13.921729 kernel: Run /init as init process
Sep 13 00:05:13.921745 kernel: with arguments:
Sep 13 00:05:13.921761 kernel: /init
Sep 13 00:05:13.921777 kernel: with environment:
Sep 13 00:05:13.921793 kernel: HOME=/
Sep 13 00:05:13.921808 kernel: TERM=linux
Sep 13 00:05:13.921824 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:05:13.921847 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:13.921868 systemd[1]: Detected virtualization amazon.
Sep 13 00:05:13.921885 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:13.921902 systemd[1]: Running in initrd.
Sep 13 00:05:13.921920 systemd[1]: No hostname configured, using default hostname.
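The PCI probe lines above identify devices by bracketed vendor:device IDs: 8086 entries are the emulated Intel 440FX/PIIX chipset, while the 1d0f (Amazon) entries are the EC2 paravirtual devices, with [1d0f:8061] probed later as the nvme0 EBS controller and [1d0f:ec20] as the Elastic Network Adapter. A sketch of decoding such lines; the lookup table covers only the IDs that appear in this log, with names supplied for illustration:

# Sketch: map [vendor:device] fields from the log to readable names.
import re

KNOWN = {
    ("8086", "1237"): "Intel 440FX host bridge",
    ("8086", "7000"): "Intel PIIX3 ISA bridge",
    ("8086", "7113"): "Intel PIIX4 ACPI",
    ("1d0f", "1111"): "Amazon EC2 display controller",
    ("1d0f", "8061"): "Amazon EC2 NVMe (EBS) controller",
    ("1d0f", "ec20"): "Amazon EC2 Elastic Network Adapter",
}

def decode(line: str):
    m = re.search(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\]", line)
    if not m:
        return None
    addr, vendor, device = m.groups()
    return f"{addr}: {KNOWN.get((vendor, device), 'unknown device')}"

print(decode("pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000"))
# -> 0000:00:05.0: Amazon EC2 Elastic Network Adapter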
Sep 13 00:05:13.921937 systemd[1]: Hostname set to .
Sep 13 00:05:13.921957 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:05:13.921972 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:05:13.921987 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:13.922002 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:13.922018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:05:13.922035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:13.922053 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:05:13.922075 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:05:13.922097 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:05:13.922115 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:05:13.922133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:13.922147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:05:13.922166 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:05:13.922183 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:13.922200 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:13.922215 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:05:13.922234 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:05:13.922250 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:05:13.922267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:05:13.922285 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:05:13.922303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:13.922324 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:13.922341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:13.922357 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:05:13.922373 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:05:13.922390 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:13.922408 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:05:13.922427 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:05:13.922445 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:13.922468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:13.925317 systemd-journald[178]: Collecting audit messages is disabled.
Sep 13 00:05:13.925374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:13.925393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:05:13.925420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
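The device unit names above are systemd path escapes: '/' separators become '-', and a literal '-' inside a path component becomes '\x2d', so /dev/disk/by-label/EFI-SYSTEM maps to dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified Python sketch of the rule; the real systemd-escape also hex-escapes other special characters, and this version covers only the characters that occur in this log:

# Simplified sketch of systemd's path escaping for .device unit names.
def escape_path(path: str) -> str:
    parts = path.strip("/").split("/")
    return "-".join(p.replace("-", "\\x2d") for p in parts)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device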
Sep 13 00:05:13.925446 systemd-journald[178]: Journal started
Sep 13 00:05:13.925539 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2d2fed94840ccc3ccbd9db79c26125) is 4.7M, max 38.2M, 33.4M free.
Sep 13 00:05:13.930561 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:13.929925 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:05:13.930587 systemd-modules-load[179]: Inserted module 'overlay'
Sep 13 00:05:13.943276 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:05:13.954767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:13.958238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:13.960964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:05:13.970592 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:13.978692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:13.990632 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:05:13.991048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:13.997474 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 13 00:05:13.998185 kernel: Bridge firewalling registered
Sep 13 00:05:13.999741 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:14.006776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:14.008647 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:05:14.010113 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:14.016800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:14.018750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:05:14.026880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:14.032718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:14.038013 dracut-cmdline[212]: dracut-dracut-053
Sep 13 00:05:14.042617 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:05:14.068073 systemd-resolved[216]: Positive Trust Anchors:
Sep 13 00:05:14.068090 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:14.068125 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:14.073102 systemd-resolved[216]: Defaulting to hostname 'linux'.
Sep 13 00:05:14.076068 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:14.076528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:05:14.117522 kernel: SCSI subsystem initialized
Sep 13 00:05:14.127516 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:05:14.139514 kernel: iscsi: registered transport (tcp)
Sep 13 00:05:14.161600 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:05:14.161695 kernel: QLogic iSCSI HBA Driver
Sep 13 00:05:14.202093 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:05:14.207717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:05:14.234931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:05:14.235011 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:05:14.235033 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:05:14.278528 kernel: raid6: avx512x4 gen() 18054 MB/s
Sep 13 00:05:14.296525 kernel: raid6: avx512x2 gen() 17926 MB/s
Sep 13 00:05:14.314524 kernel: raid6: avx512x1 gen() 17920 MB/s
Sep 13 00:05:14.332519 kernel: raid6: avx2x4 gen() 17838 MB/s
Sep 13 00:05:14.350523 kernel: raid6: avx2x2 gen() 17777 MB/s
Sep 13 00:05:14.368793 kernel: raid6: avx2x1 gen() 13097 MB/s
Sep 13 00:05:14.368866 kernel: raid6: using algorithm avx512x4 gen() 18054 MB/s
Sep 13 00:05:14.387699 kernel: raid6: .... xor() 7564 MB/s, rmw enabled
Sep 13 00:05:14.387774 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:05:14.409540 kernel: xor: automatically using best checksumming function avx
Sep 13 00:05:14.569528 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:05:14.579710 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:05:14.584690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:14.609439 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Sep 13 00:05:14.614614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:14.624790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:05:14.642263 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Sep 13 00:05:14.673202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:05:14.682764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:14.734914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:14.741772 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
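The raid6 lines above are the kernel benchmarking each gen() implementation at boot and keeping the fastest (avx512x4 at 18054 MB/s here). The selection amounts to a max over the measured throughputs; a sketch over the numbers from this log:

# Sketch: pick the fastest raid6 gen() implementation, as the kernel
# does after its boot-time benchmark. Figures copied from this log.
results = {
    "avx512x4": 18054, "avx512x2": 17926, "avx512x1": 17920,
    "avx2x4": 17838, "avx2x2": 17777, "avx2x1": 13097,  # MB/s
}
best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")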
Sep 13 00:05:14.775155 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:05:14.777518 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:05:14.780065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:14.781170 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:14.787761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:05:14.822817 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:05:14.840534 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:05:14.860365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:05:14.861292 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:05:14.868351 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:05:14.868389 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:05:14.866872 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:14.867436 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:14.894972 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:05:14.895224 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:05:14.895394 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 13 00:05:14.896661 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:9c:fb:4f:f7:15
Sep 13 00:05:14.867674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:14.869637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:14.882854 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:14.889930 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:05:14.909125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:05:14.909249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:14.920779 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:05:14.921069 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:05:14.923430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:14.942123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:05:14.946825 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:05:14.951705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:05:14.963026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:05:14.963061 kernel: GPT:9289727 != 16777215
Sep 13 00:05:14.963089 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:05:14.963110 kernel: GPT:9289727 != 16777215
Sep 13 00:05:14.963129 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:05:14.963149 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:05:14.979762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
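The "GPT:9289727 != 16777215" warnings above mean the backup GPT header sits at LBA 9289727 while, on a 16777216-sector volume, it belongs at the last LBA, 16777215: the EBS volume is larger than the disk image it was created from. The arithmetic, assuming the 512-byte logical sectors EBS exposes:

# Sketch: where the backup GPT header should live vs. where it was found.
SECTOR = 512                 # bytes per logical sector
total_sectors = 16777216     # last usable LBA reported as 16777215
image_sectors = 9289728      # backup header found at LBA 9289727

print(total_sectors * SECTOR / 2**30)   # 8.0  -> an 8 GiB volume
print(image_sectors * SECTOR / 2**30)   # ~4.4 -> the original image size
print(f"expected LBA {total_sectors - 1}, found LBA {image_sectors - 1}")

The disk-uuid entries further down show the primary and secondary GPT headers being rewritten, after which the partitions are rescanned.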
Sep 13 00:05:15.042646 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (450)
Sep 13 00:05:15.055511 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (443)
Sep 13 00:05:15.112199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 13 00:05:15.129158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 13 00:05:15.138754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 13 00:05:15.154339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 13 00:05:15.154924 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 13 00:05:15.160711 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:05:15.169619 disk-uuid[630]: Primary Header is updated.
Sep 13 00:05:15.169619 disk-uuid[630]: Secondary Entries is updated.
Sep 13 00:05:15.169619 disk-uuid[630]: Secondary Header is updated.
Sep 13 00:05:15.179548 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:05:15.186513 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:05:15.194566 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:05:16.194775 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:05:16.194842 disk-uuid[631]: The operation has completed successfully.
Sep 13 00:05:16.302219 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:05:16.302325 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:05:16.330716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:05:16.337923 sh[976]: Success
Sep 13 00:05:16.359511 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:05:16.463072 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:05:16.478208 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:05:16.478976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:05:16.510100 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:05:16.510162 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:16.510176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:05:16.513390 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:05:16.513458 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:05:16.579513 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:05:16.591120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:05:16.592363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:05:16.603753 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:05:16.605707 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
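verity-setup.service above assembles /dev/mapper/usr from the verity.usr= partition and the verity.usrhash= root hash on the kernel command line; dm-verity then authenticates every /usr block against that sha256 hash tree on read. A hedged sketch of checking the same binding offline with veritysetup from the cryptsetup package; the device path and hash are this log's, and since Flatcar appends the hash tree to the same partition, a real invocation likely also needs the appropriate --hash-offset for the image:

# Sketch only: verify a dm-verity data/hash pair against the root hash
# from this log's kernel command line. Assumes veritysetup(8) is present.
import subprocess

ROOT_HASH = "2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534"
DEVICE = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"

# 'veritysetup verify <data_dev> <hash_dev> <root_hash>' exits 0 on match;
# here data and hash share one partition (hash offset omitted for brevity).
subprocess.run(["veritysetup", "verify", DEVICE, DEVICE, ROOT_HASH], check=True)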
Sep 13 00:05:16.637570 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:16.637665 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:16.637689 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:05:16.648518 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:05:16.665134 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:16.664794 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:05:16.674036 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:05:16.681764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:05:16.714641 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:05:16.718729 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:16.753272 systemd-networkd[1168]: lo: Link UP
Sep 13 00:05:16.753286 systemd-networkd[1168]: lo: Gained carrier
Sep 13 00:05:16.755032 systemd-networkd[1168]: Enumeration completed
Sep 13 00:05:16.755488 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:16.755502 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:05:16.756765 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:16.758948 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:16.759604 systemd-networkd[1168]: eth0: Link UP
Sep 13 00:05:16.759609 systemd-networkd[1168]: eth0: Gained carrier
Sep 13 00:05:16.759624 systemd-networkd[1168]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:05:16.779659 systemd-networkd[1168]: eth0: DHCPv4 address 172.31.17.100/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:05:17.045171 ignition[1121]: Ignition 2.19.0
Sep 13 00:05:17.045220 ignition[1121]: Stage: fetch-offline
Sep 13 00:05:17.045454 ignition[1121]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:17.045463 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:05:17.047032 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:05:17.045705 ignition[1121]: Ignition finished successfully
Sep 13 00:05:17.054712 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
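The DHCPv4 lease above (172.31.17.100/20, gateway 172.31.16.1) is internally consistent: a /20 mask spans 172.31.16.0 through 172.31.31.255, and the advertised gateway is the first host of that block, where AWS VPCs place the router. Python's ipaddress module reproduces the arithmetic:

# Sketch: check this log's DHCP lease against its /20 prefix.
import ipaddress

iface = ipaddress.ip_interface("172.31.17.100/20")
net = iface.network
print(net)                 # 172.31.16.0/20
print(next(net.hosts()))   # 172.31.16.1, the advertised gateway
print(iface.ip in net)     # True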
Sep 13 00:05:17.068388 ignition[1176]: Ignition 2.19.0
Sep 13 00:05:17.068399 ignition[1176]: Stage: fetch
Sep 13 00:05:17.068783 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:17.068792 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:05:17.068874 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:05:17.082526 ignition[1176]: PUT result: OK
Sep 13 00:05:17.085354 ignition[1176]: parsed url from cmdline: ""
Sep 13 00:05:17.085364 ignition[1176]: no config URL provided
Sep 13 00:05:17.085372 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:05:17.085385 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:05:17.085406 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:05:17.086174 ignition[1176]: PUT result: OK
Sep 13 00:05:17.086212 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:05:17.087561 ignition[1176]: GET result: OK
Sep 13 00:05:17.087626 ignition[1176]: parsing config with SHA512: e8aaccd6fa71812f25929391b7ea56c4fa499695a92b0ff171fd585c9cdbed95f8f2080d5e37d431c86e26fc821d0be4a0ced693845a78c2e50163792973c03a
Sep 13 00:05:17.091615 unknown[1176]: fetched base config from "system"
Sep 13 00:05:17.091627 unknown[1176]: fetched base config from "system"
Sep 13 00:05:17.091633 unknown[1176]: fetched user config from "aws"
Sep 13 00:05:17.096130 ignition[1176]: fetch: fetch complete
Sep 13 00:05:17.096140 ignition[1176]: fetch: fetch passed
Sep 13 00:05:17.096205 ignition[1176]: Ignition finished successfully
Sep 13 00:05:17.099412 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:05:17.103815 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:05:17.124209 ignition[1182]: Ignition 2.19.0
Sep 13 00:05:17.124225 ignition[1182]: Stage: kargs
Sep 13 00:05:17.124758 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:17.124772 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:05:17.124902 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:05:17.126142 ignition[1182]: PUT result: OK
Sep 13 00:05:17.129613 ignition[1182]: kargs: kargs passed
Sep 13 00:05:17.129743 ignition[1182]: Ignition finished successfully
Sep 13 00:05:17.131598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:05:17.136833 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:05:17.152579 ignition[1189]: Ignition 2.19.0
Sep 13 00:05:17.152601 ignition[1189]: Stage: disks
Sep 13 00:05:17.153076 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:05:17.153091 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:05:17.153220 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:05:17.154416 ignition[1189]: PUT result: OK
Sep 13 00:05:17.157539 ignition[1189]: disks: disks passed
Sep 13 00:05:17.157604 ignition[1189]: Ignition finished successfully
Sep 13 00:05:17.158803 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:05:17.159858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:05:17.160681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:05:17.160977 systemd[1]: Reached target local-fs.target - Local File Systems.
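The PUT-then-GET pairs in the Ignition stages above are the IMDSv2 flow: a session token is requested with an HTTP PUT to http://169.254.169.254/latest/api/token, then presented on the user-data GET. A minimal sketch with urllib; the header names are the documented AWS ones, and this only works from inside an instance:

# Sketch of the IMDSv2 exchange Ignition logs above: PUT for a session
# token, then GET the user data with that token attached.
import urllib.request

IMDS = "http://169.254.169.254"

req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
token = urllib.request.urlopen(req, timeout=2).read().decode()

req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",   # same API date Ignition uses here
    headers={"X-aws-ec2-metadata-token": token})
print(urllib.request.urlopen(req, timeout=2).read().decode())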
Sep 13 00:05:17.161562 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:05:17.162153 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:05:17.167728 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:05:17.210786 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:05:17.213819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:05:17.218667 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:05:17.318523 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:05:17.318740 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:05:17.320041 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:17.332629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:05:17.336589 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:05:17.338637 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:05:17.339999 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:05:17.340042 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:05:17.345823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:05:17.350723 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:05:17.357562 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Sep 13 00:05:17.361567 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:17.361643 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:05:17.361665 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:05:17.379526 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:05:17.381056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:05:17.604097 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:05:17.619837 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:05:17.624842 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:05:17.629437 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:05:17.860260 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:05:17.868716 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:05:17.879721 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:05:17.882580 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:05:17.883416 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:05:17.911640 ignition[1331]: INFO : Ignition 2.19.0 Sep 13 00:05:17.911640 ignition[1331]: INFO : Stage: mount Sep 13 00:05:17.913031 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:17.913031 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:05:17.913031 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:05:17.914531 ignition[1331]: INFO : PUT result: OK Sep 13 00:05:17.916933 ignition[1331]: INFO : mount: mount passed Sep 13 00:05:17.918543 ignition[1331]: INFO : Ignition finished successfully Sep 13 00:05:17.919598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:05:17.920332 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:05:17.924608 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:05:17.937684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:05:17.961674 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1344) Sep 13 00:05:17.961731 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:05:17.964632 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:05:17.964685 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:05:17.973614 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:05:17.976367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:05:17.995831 ignition[1360]: INFO : Ignition 2.19.0 Sep 13 00:05:17.995831 ignition[1360]: INFO : Stage: files Sep 13 00:05:17.997033 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:17.997033 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:05:17.997033 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:05:17.998116 ignition[1360]: INFO : PUT result: OK Sep 13 00:05:17.999344 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:05:18.010820 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:05:18.010820 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:05:18.027459 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:05:18.028751 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:05:18.028751 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:05:18.028145 unknown[1360]: wrote ssh authorized keys file for user: core Sep 13 00:05:18.030550 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:05:18.030550 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:05:18.105056 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:05:18.284626 systemd-networkd[1168]: eth0: Gained IPv6LL Sep 13 00:05:18.417106 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:05:18.417987 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:05:18.425708 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:05:18.425708 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:05:18.425708 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:05:18.848214 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:05:19.277105 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:05:19.277105 ignition[1360]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:05:19.279165 ignition[1360]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 
00:05:19.280061 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:05:19.280061 ignition[1360]: INFO : files: files passed Sep 13 00:05:19.280061 ignition[1360]: INFO : Ignition finished successfully Sep 13 00:05:19.281960 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:05:19.295509 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:05:19.297308 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:05:19.299807 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:05:19.300540 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:05:19.311709 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:05:19.311709 initrd-setup-root-after-ignition[1390]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:05:19.314902 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:05:19.316926 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:05:19.317824 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:05:19.323725 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:05:19.348358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:05:19.348476 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:05:19.349733 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:05:19.350722 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:05:19.351522 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:05:19.357762 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:05:19.371520 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:05:19.380856 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:05:19.392655 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:05:19.393406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:05:19.394385 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:05:19.395176 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:05:19.395357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:05:19.396583 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:05:19.397400 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:05:19.398178 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:05:19.398956 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:05:19.399719 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:05:19.400650 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:05:19.401397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
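initrd-setup-root-after-ignition greps two optional files and, as the "No such file or directory" messages above show, treats their absence as success rather than an error. The same tolerate-missing-config pattern in Python (file paths copied from the log; the line parsing is illustrative):

    from pathlib import Path

    def enabled_sysexts(root: str = "/sysroot") -> list[str]:
        names: list[str] = []
        for conf in (
            Path(root, "etc/flatcar/enabled-sysext.conf"),
            Path(root, "usr/share/flatcar/enabled-sysext.conf"),
        ):
            try:
                text = conf.read_text()
            except FileNotFoundError:
                continue  # a missing optional config is not a failure
            names += [ln.strip() for ln in text.splitlines()
                      if ln.strip() and not ln.startswith("#")]
        return names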
Sep 13 00:05:19.402185 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:05:19.403262 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:05:19.404174 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:05:19.404850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:05:19.405033 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:05:19.406135 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:05:19.406936 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:05:19.407612 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:05:19.407757 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:05:19.408563 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:05:19.408745 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:05:19.410096 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:05:19.410291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:05:19.411006 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:05:19.411164 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:05:19.417855 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:05:19.419440 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:05:19.420559 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:05:19.423962 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:05:19.425878 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:05:19.427374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:05:19.432874 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:05:19.434893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:05:19.438918 ignition[1414]: INFO : Ignition 2.19.0 Sep 13 00:05:19.438918 ignition[1414]: INFO : Stage: umount Sep 13 00:05:19.440406 ignition[1414]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:05:19.440406 ignition[1414]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:05:19.443803 ignition[1414]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:05:19.443803 ignition[1414]: INFO : PUT result: OK Sep 13 00:05:19.444876 ignition[1414]: INFO : umount: umount passed Sep 13 00:05:19.444876 ignition[1414]: INFO : Ignition finished successfully Sep 13 00:05:19.447795 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:05:19.447950 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:05:19.450206 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:05:19.450333 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:05:19.454537 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:05:19.454615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:05:19.455624 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:05:19.455687 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
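The long run of Stopped/Closed messages is the initrd unwinding its setup in roughly the reverse of the order it was established. The same last-in-first-out discipline is what contextlib.ExitStack gives you in Python (an analogy to illustrate the ordering, not how systemd is implemented; the unit names are just examples from this log):

    from contextlib import ExitStack

    def teardown_demo() -> None:
        with ExitStack() as stack:
            for name in ("ignition-setup", "systemd-networkd", "systemd-udevd"):
                # Callbacks run last-in first-out when the stack unwinds,
                # mirroring stop jobs running in reverse dependency order.
                stack.callback(print, f"stopped {name}")
            print("switching root soon")

    teardown_demo()
    # prints: switching root soon, stopped systemd-udevd,
    #         stopped systemd-networkd, stopped ignition-setup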
Sep 13 00:05:19.458130 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:05:19.458192 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:05:19.458706 systemd[1]: Stopped target network.target - Network. Sep 13 00:05:19.459038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:05:19.459084 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:05:19.460085 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:05:19.461054 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:05:19.464556 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:05:19.465870 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:05:19.466880 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:05:19.467277 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:05:19.467338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:05:19.468008 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:05:19.468063 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:05:19.469945 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:05:19.470020 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:05:19.470591 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:05:19.470657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:05:19.471435 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:05:19.472223 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:05:19.475823 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:05:19.477679 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:05:19.477828 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:05:19.480389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:05:19.480471 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:05:19.480694 systemd-networkd[1168]: eth0: DHCPv6 lease lost Sep 13 00:05:19.483351 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:05:19.483521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:05:19.485249 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:05:19.485333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:05:19.492684 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:05:19.493400 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:05:19.493477 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:05:19.494023 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:05:19.494081 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:05:19.496466 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:05:19.496556 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:05:19.497337 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:05:19.515182 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 13 00:05:19.515825 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:05:19.516690 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:05:19.516774 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:05:19.518439 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:05:19.518821 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:05:19.519660 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:05:19.519697 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:05:19.520356 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:05:19.520405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:05:19.521462 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:05:19.521552 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:05:19.522647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:05:19.522694 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:05:19.533708 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:05:19.534801 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:05:19.534866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:05:19.536418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:05:19.536470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:05:19.540209 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:05:19.540320 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:05:19.619478 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:05:19.619645 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:05:19.624966 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:05:19.627211 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:05:19.627305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:05:19.638101 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:05:19.651342 systemd[1]: Switching root. Sep 13 00:05:19.700968 systemd-journald[178]: Journal stopped Sep 13 00:05:21.138703 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Sep 13 00:05:21.138776 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:05:21.138792 kernel: SELinux: policy capability open_perms=1 Sep 13 00:05:21.138807 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:05:21.138819 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:05:21.138833 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:05:21.138846 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:05:21.138858 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:05:21.138874 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:05:21.138889 kernel: audit: type=1403 audit(1757721920.027:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:05:21.138903 systemd[1]: Successfully loaded SELinux policy in 63.492ms. 
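After the switch root, PID 1 loads the SELinux policy before starting anything else, which is what the policy-capability lines and the audit record above document. Whether a policy ended up loaded, and in which mode, can be read back from selinuxfs (the paths below are the kernel's standard selinuxfs layout):

    from pathlib import Path

    def selinux_mode() -> str:
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return "disabled (no policy loaded)"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    print(selinux_mode())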
Sep 13 00:05:21.138922 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.697ms. Sep 13 00:05:21.138939 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:05:21.138952 systemd[1]: Detected virtualization amazon. Sep 13 00:05:21.138967 systemd[1]: Detected architecture x86-64. Sep 13 00:05:21.138980 systemd[1]: Detected first boot. Sep 13 00:05:21.138993 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:05:21.139006 zram_generator::config[1458]: No configuration found. Sep 13 00:05:21.139020 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:05:21.139033 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:05:21.139046 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:05:21.139059 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:05:21.139075 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:05:21.139088 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:05:21.139101 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:05:21.139114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:05:21.139126 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:05:21.139139 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:05:21.139155 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:05:21.139168 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:05:21.139183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:05:21.139195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:05:21.139208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:05:21.139221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:05:21.139234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:05:21.139246 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:05:21.139258 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:05:21.139271 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:05:21.139283 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:05:21.139298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:05:21.139311 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:05:21.139324 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:05:21.139336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:05:21.139350 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
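"Detected first boot" and the machine-ID message reflect systemd's documented rule: an absent, empty, or "uninitialized" /etc/machine-id marks a first boot, and on virtual machines the ID can be seeded from the hypervisor-provided UUID. A sketch of that detection (the DMI path is the standard x86 location; actually writing the ID is left out):

    from pathlib import Path

    def is_first_boot() -> bool:
        mid = Path("/etc/machine-id")
        return (not mid.exists()
                or mid.read_text().strip() in ("", "uninitialized"))

    def vm_uuid() -> str:
        # KVM/EC2 expose the instance UUID through DMI.
        return Path("/sys/class/dmi/id/product_uuid").read_text().strip()

    if is_first_boot():
        print("would initialize machine ID from", vm_uuid())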
Sep 13 00:05:21.139363 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:05:21.139375 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:05:21.139388 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:05:21.139403 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:05:21.139416 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:05:21.139428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:05:21.139441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:05:21.139454 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:05:21.139466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:05:21.139479 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:05:21.149915 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:05:21.149951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:05:21.149972 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:05:21.149986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:05:21.149999 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:05:21.150013 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:05:21.152526 systemd[1]: Reached target machines.target - Containers. Sep 13 00:05:21.152571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:05:21.152586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:05:21.152600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:05:21.152613 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:05:21.152634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:05:21.152647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:05:21.152660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:05:21.152673 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:05:21.152685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:05:21.152698 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:05:21.152711 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:05:21.152724 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:05:21.152740 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:05:21.152753 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:05:21.152766 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:05:21.152778 kernel: loop: module loaded Sep 13 00:05:21.152792 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
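modprobe@configfs, modprobe@dm_mod, and the other modprobe@ services starting above are instances of a single template unit that effectively runs modprobe -abq with the instance name as the module argument. The loop below does the same job (module list taken from this log; failures are ignored, matching the template's ExecStart=- semantics):

    import subprocess

    for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        # Roughly what modprobe@%i.service executes for each instance name.
        subprocess.run(["modprobe", "-abq", module], check=False)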
Sep 13 00:05:21.152805 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:05:21.152817 kernel: fuse: init (API version 7.39) Sep 13 00:05:21.152830 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:05:21.152844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:05:21.152859 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:05:21.152872 systemd[1]: Stopped verity-setup.service. Sep 13 00:05:21.152886 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:05:21.152899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:05:21.152912 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:05:21.152925 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:05:21.152937 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:05:21.152950 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:05:21.152963 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:05:21.152979 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:05:21.152992 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:05:21.153005 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:05:21.153018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:05:21.153034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:05:21.153046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:05:21.153059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:05:21.153072 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:05:21.153087 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:05:21.153100 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:05:21.153115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:05:21.153129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:05:21.153142 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:05:21.153155 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:05:21.153202 systemd-journald[1536]: Collecting audit messages is disabled. Sep 13 00:05:21.153227 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:05:21.153240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:05:21.153255 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:05:21.153269 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:05:21.153281 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:05:21.153295 systemd-journald[1536]: Journal started Sep 13 00:05:21.153320 systemd-journald[1536]: Runtime Journal (/run/log/journal/ec2d2fed94840ccc3ccbd9db79c26125) is 4.7M, max 38.2M, 33.4M free. 
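The journal sizing line above (4.7M used, 38.2M max, 33.4M free) matches journald's documented default of capping the runtime journal at 10% of the backing filesystem, itself capped at 4G. A statvfs-based sketch of that cap (the exact accounting journald does is more involved; this shows only the headline limit):

    import os

    def runtime_journal_cap(path: str = "/run/log/journal") -> int:
        st = os.statvfs(path)
        fs_bytes = st.f_frsize * st.f_blocks
        # Documented default: 10% of the filesystem, capped at 4G.
        return min(fs_bytes // 10, 4 * 2**30)

    print(f"{runtime_journal_cap() / 2**20:.1f} MiB")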
Sep 13 00:05:20.830144 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:05:20.877013 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 13 00:05:20.877427 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:05:21.156780 kernel: ACPI: bus type drm_connector registered Sep 13 00:05:21.159508 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:05:21.168509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:05:21.179063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:05:21.179134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:05:21.186157 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:05:21.186235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:05:21.197504 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:05:21.200514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:05:21.210228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:05:21.216546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:05:21.221891 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:05:21.224986 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:05:21.226275 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:05:21.226561 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:05:21.227699 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:05:21.230752 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:05:21.232562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:05:21.233299 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:05:21.233937 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:05:21.249349 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:05:21.256231 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:05:21.263449 kernel: loop0: detected capacity change from 0 to 140768 Sep 13 00:05:21.266949 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:05:21.275724 systemd-journald[1536]: Time spent on flushing to /var/log/journal/ec2d2fed94840ccc3ccbd9db79c26125 is 49.881ms for 988 entries. Sep 13 00:05:21.275724 systemd-journald[1536]: System Journal (/var/log/journal/ec2d2fed94840ccc3ccbd9db79c26125) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:05:21.341507 systemd-journald[1536]: Received client request to flush runtime journal. Sep 13 00:05:21.277911 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:05:21.279679 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Sep 13 00:05:21.280689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:05:21.326829 udevadm[1597]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:05:21.343292 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:05:21.359849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:05:21.360760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:05:21.371067 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:05:21.384036 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:05:21.382547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:05:21.417075 kernel: loop1: detected capacity change from 0 to 61336 Sep 13 00:05:21.424519 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Sep 13 00:05:21.424547 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Sep 13 00:05:21.439819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:05:21.540111 kernel: loop2: detected capacity change from 0 to 142488 Sep 13 00:05:21.656671 kernel: loop3: detected capacity change from 0 to 221472 Sep 13 00:05:21.767514 kernel: loop4: detected capacity change from 0 to 140768 Sep 13 00:05:21.808519 kernel: loop5: detected capacity change from 0 to 61336 Sep 13 00:05:21.831724 kernel: loop6: detected capacity change from 0 to 142488 Sep 13 00:05:21.859528 kernel: loop7: detected capacity change from 0 to 221472 Sep 13 00:05:21.889730 (sd-merge)[1613]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 13 00:05:21.890284 (sd-merge)[1613]: Merged extensions into '/usr'. Sep 13 00:05:21.897607 systemd[1]: Reloading requested from client PID 1569 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:05:21.897624 systemd[1]: Reloading... Sep 13 00:05:21.988516 zram_generator::config[1638]: No configuration found. Sep 13 00:05:22.134667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:22.190323 systemd[1]: Reloading finished in 291 ms. Sep 13 00:05:22.218628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:05:22.219804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:05:22.228760 systemd[1]: Starting ensure-sysext.service... Sep 13 00:05:22.230619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:05:22.240782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:05:22.247850 systemd[1]: Reloading requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:05:22.247950 systemd[1]: Reloading... Sep 13 00:05:22.260904 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:05:22.261454 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
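The loop-device messages and sd-merge's "Merged extensions into '/usr'" describe systemd-sysext at work: each extension image is attached read-only, then stacked over the host /usr with an overlayfs mount. A rough manual equivalent (the lowerdir syntax is standard overlayfs, with the leftmost layer taking precedence; the /run/extensions mount points are illustrative, not the paths sysext actually uses):

    import subprocess

    # Extension trees, named after the log, each already mounted read-only.
    extensions = [
        "/run/extensions/kubernetes",
        "/run/extensions/docker-flatcar",
        "/run/extensions/containerd-flatcar",
    ]
    # Highest-priority layer first; the host /usr is the bottom layer.
    lower = ":".join(extensions + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lower}", "/usr"],
        check=True,
    )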
Sep 13 00:05:22.262965 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:05:22.263431 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Sep 13 00:05:22.263550 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Sep 13 00:05:22.280753 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:05:22.281004 systemd-tmpfiles[1692]: Skipping /boot Sep 13 00:05:22.292359 systemd-udevd[1693]: Using default interface naming scheme 'v255'. Sep 13 00:05:22.303249 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:05:22.305379 systemd-tmpfiles[1692]: Skipping /boot Sep 13 00:05:22.374515 zram_generator::config[1721]: No configuration found. Sep 13 00:05:22.503354 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:22.519142 ldconfig[1565]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:05:22.647538 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:05:22.670521 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 13 00:05:22.678557 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1724) Sep 13 00:05:22.687512 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:05:22.733524 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:05:22.757513 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 13 00:05:22.765779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:22.777522 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:05:22.894755 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:05:22.895948 systemd[1]: Reloading finished in 647 ms. Sep 13 00:05:22.915932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:05:22.918037 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:05:22.924211 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:05:22.924109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:05:22.962104 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:05:22.963146 systemd[1]: Finished ensure-sysext.service. Sep 13 00:05:22.986047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 13 00:05:22.986751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:05:22.994691 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:05:22.998683 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:05:23.001578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:05:23.003668 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
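The "Duplicate line for path ..., ignoring" warnings above show systemd-tmpfiles' rule when several tmpfiles.d fragments claim the same path: the first entry parsed wins and later duplicates are dropped with a warning. The core of that dedupe (parsing reduced to whitespace splitting for the sketch):

    def dedupe_tmpfiles(lines: list[str]) -> dict[str, str]:
        by_path: dict[str, str] = {}
        for line in lines:
            fields = line.split()
            if len(fields) < 2 or line.lstrip().startswith("#"):
                continue
            path = fields[1]  # tmpfiles.d format: Type Path Mode User Group Age
            if path in by_path:
                print(f'Duplicate line for path "{path}", ignoring.')
                continue  # the first entry parsed wins
            by_path[path] = line
        return by_path

    dedupe_tmpfiles(["d /root 0700 - - -", "d /root 0750 - - -"])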
Sep 13 00:05:23.008898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:05:23.018722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:05:23.023412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:05:23.026914 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:05:23.028483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:05:23.032682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:05:23.048369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:05:23.059779 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:05:23.068870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:05:23.070318 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:05:23.073727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:05:23.078120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:05:23.079575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:05:23.080677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:05:23.082538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:05:23.083820 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:05:23.084037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:05:23.085115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:05:23.085890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:05:23.090424 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:05:23.090878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:05:23.092209 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:05:23.103630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:05:23.103723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:05:23.117732 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:05:23.139337 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:05:23.152647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:05:23.159954 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:05:23.163594 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:05:23.175764 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:05:23.189902 augenrules[1925]: No rules Sep 13 00:05:23.190938 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 13 00:05:23.201808 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:05:23.203971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:05:23.208510 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:05:23.222235 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:05:23.223125 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:05:23.236931 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:05:23.247576 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:05:23.248869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:05:23.294959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:05:23.336602 systemd-networkd[1903]: lo: Link UP Sep 13 00:05:23.336613 systemd-networkd[1903]: lo: Gained carrier Sep 13 00:05:23.338276 systemd-networkd[1903]: Enumeration completed Sep 13 00:05:23.338661 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:05:23.341686 systemd-networkd[1903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:05:23.341798 systemd-networkd[1903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:05:23.346551 systemd-networkd[1903]: eth0: Link UP Sep 13 00:05:23.346753 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:05:23.347778 systemd-networkd[1903]: eth0: Gained carrier Sep 13 00:05:23.349208 systemd-networkd[1903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:05:23.362590 systemd-networkd[1903]: eth0: DHCPv4 address 172.31.17.100/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:05:23.362826 systemd-resolved[1904]: Positive Trust Anchors: Sep 13 00:05:23.363168 systemd-resolved[1904]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:05:23.363310 systemd-resolved[1904]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:05:23.368731 systemd-resolved[1904]: Defaulting to hostname 'linux'. Sep 13 00:05:23.370664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:05:23.371206 systemd[1]: Reached target network.target - Network. Sep 13 00:05:23.371661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:05:23.372066 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:05:23.372559 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
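The DHCPv4 message above carries the whole layer-3 picture: 172.31.17.100/20 with gateway 172.31.16.1, acquired from 172.31.16.1. Python's ipaddress module confirms the gateway sits inside the same /20 (address values copied from the log):

    import ipaddress

    iface = ipaddress.ip_interface("172.31.17.100/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True
    print(iface.network.num_addresses)  # 4096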
Sep 13 00:05:23.372970 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:05:23.373483 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:05:23.373966 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:05:23.374327 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:05:23.374715 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:05:23.374756 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:05:23.375114 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:05:23.377161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:05:23.379010 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:05:23.383541 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:05:23.384603 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:05:23.385091 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:05:23.385464 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:05:23.385879 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:05:23.385917 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:05:23.387015 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:05:23.391709 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:05:23.401807 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:05:23.404616 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:05:23.412752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:05:23.415577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:05:23.425734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:05:23.431476 systemd[1]: Started ntpd.service - Network Time Service. Sep 13 00:05:23.437694 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:05:23.453624 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 13 00:05:23.456714 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:05:23.464873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:05:23.481710 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:05:23.482818 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:05:23.484966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:05:23.487218 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:05:23.491559 jq[1952]: false Sep 13 00:05:23.493658 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 13 00:05:23.503051 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:05:23.503320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:05:23.556521 update_engine[1967]: I20250913 00:05:23.555121 1967 main.cc:92] Flatcar Update Engine starting Sep 13 00:05:23.558923 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:05:23.559173 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:05:23.564096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:05:23.564377 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:05:23.583085 jq[1968]: true Sep 13 00:05:23.589624 extend-filesystems[1953]: Found loop4 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found loop5 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found loop6 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found loop7 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p1 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p2 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p3 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found usr Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p4 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p6 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p7 Sep 13 00:05:23.593292 extend-filesystems[1953]: Found nvme0n1p9 Sep 13 00:05:23.593292 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Sep 13 00:05:23.596167 dbus-daemon[1951]: [system] SELinux support is enabled Sep 13 00:05:23.596385 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:05:23.602096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:05:23.602133 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:05:23.604689 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:05:23.604720 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
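The long "Found ..." listing that follows is extend-filesystems taking a plain inventory of block devices before deciding which partition to grow. The kernel exposes the same inventory in /proc/partitions, which a few lines of Python can read back:

    def block_devices() -> list[str]:
        with open("/proc/partitions") as f:
            lines = f.read().splitlines()[2:]  # skip the header and blank line
        # Columns are: major minor #blocks name
        return [ln.split()[3] for ln in lines if ln.strip()]

    for name in block_devices():
        print("Found", name)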
Sep 13 00:05:23.615530 tar[1974]: linux-amd64/helm Sep 13 00:05:23.615354 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:05:23.625898 coreos-metadata[1950]: Sep 13 00:05:23.625 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:05:23.626705 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1903 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:05:23.631043 coreos-metadata[1950]: Sep 13 00:05:23.630 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 13 00:05:23.633464 update_engine[1967]: I20250913 00:05:23.632656 1967 update_check_scheduler.cc:74] Next update check in 7m23s Sep 13 00:05:23.634642 coreos-metadata[1950]: Sep 13 00:05:23.633 INFO Fetch successful Sep 13 00:05:23.634642 coreos-metadata[1950]: Sep 13 00:05:23.634 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 13 00:05:23.643038 coreos-metadata[1950]: Sep 13 00:05:23.642 INFO Fetch successful Sep 13 00:05:23.643038 coreos-metadata[1950]: Sep 13 00:05:23.642 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 13 00:05:23.639838 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 00:05:23.643886 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:05:23.646926 coreos-metadata[1950]: Sep 13 00:05:23.645 INFO Fetch successful Sep 13 00:05:23.646926 coreos-metadata[1950]: Sep 13 00:05:23.645 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 13 00:05:23.649596 coreos-metadata[1950]: Sep 13 00:05:23.648 INFO Fetch successful Sep 13 00:05:23.649596 coreos-metadata[1950]: Sep 13 00:05:23.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 13 00:05:23.649801 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
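coreos-metadata walks a fixed list of metadata keys under the 2021-01-03 tree and, as the next entries show, treats a 404 (here for the ipv6 key on an instance without one) as "key absent" rather than a failure. A sketch of that tolerant loop, assuming a token obtained as in the earlier fetch-stage example (the function name and key list here are illustrative):

    import urllib.error
    import urllib.request

    def fetch_optional(path: str, token: str) -> str | None:
        req = urllib.request.Request(
            f"http://169.254.169.254{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return None  # "Fetch failed with 404" means the key is absent
            raise

    # Keys from the crawl in this log; ipv6 legitimately comes back None here.
    KEYS = ["instance-id", "instance-type", "local-ipv4", "public-ipv4", "ipv6"]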
Sep 13 00:05:23.650825 coreos-metadata[1950]: Sep 13 00:05:23.650 INFO Fetch failed with 404: resource not found Sep 13 00:05:23.650825 coreos-metadata[1950]: Sep 13 00:05:23.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 13 00:05:23.651301 coreos-metadata[1950]: Sep 13 00:05:23.651 INFO Fetch successful Sep 13 00:05:23.651301 coreos-metadata[1950]: Sep 13 00:05:23.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 13 00:05:23.653520 coreos-metadata[1950]: Sep 13 00:05:23.652 INFO Fetch successful Sep 13 00:05:23.653520 coreos-metadata[1950]: Sep 13 00:05:23.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 13 00:05:23.653745 jq[1990]: true Sep 13 00:05:23.659626 coreos-metadata[1950]: Sep 13 00:05:23.654 INFO Fetch successful Sep 13 00:05:23.659626 coreos-metadata[1950]: Sep 13 00:05:23.654 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 13 00:05:23.660176 coreos-metadata[1950]: Sep 13 00:05:23.659 INFO Fetch successful Sep 13 00:05:23.660176 coreos-metadata[1950]: Sep 13 00:05:23.660 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 13 00:05:23.660933 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:05:23.664587 coreos-metadata[1950]: Sep 13 00:05:23.662 INFO Fetch successful Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: ---------------------------------------------------- Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: corporation. Support and training for ntp-4 are Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: available at https://www.nwtime.org/support Sep 13 00:05:23.664647 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: ---------------------------------------------------- Sep 13 00:05:23.660963 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:05:23.660974 ntpd[1955]: ---------------------------------------------------- Sep 13 00:05:23.660984 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:05:23.660994 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:05:23.661004 ntpd[1955]: corporation. 
Support and training for ntp-4 are Sep 13 00:05:23.661014 ntpd[1955]: available at https://www.nwtime.org/support Sep 13 00:05:23.661026 ntpd[1955]: ---------------------------------------------------- Sep 13 00:05:23.673137 ntpd[1955]: proto: precision = 0.070 usec (-24) Sep 13 00:05:23.673276 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: proto: precision = 0.070 usec (-24) Sep 13 00:05:23.674119 ntpd[1955]: basedate set to 2025-08-31 Sep 13 00:05:23.678590 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: basedate set to 2025-08-31 Sep 13 00:05:23.678590 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: gps base set to 2025-08-31 (week 2382) Sep 13 00:05:23.674145 ntpd[1955]: gps base set to 2025-08-31 (week 2382) Sep 13 00:05:23.689102 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9 Sep 13 00:05:23.692019 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:05:23.694931 extend-filesystems[2005]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listen normally on 3 eth0 172.31.17.100:123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listen normally on 4 lo [::1]:123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: bind(21) AF_INET6 fe80::49c:fbff:fe4f:f715%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: unable to create socket on eth0 (5) for fe80::49c:fbff:fe4f:f715%2#123 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: failed to init interface for address fe80::49c:fbff:fe4f:f715%2 Sep 13 00:05:23.696809 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Sep 13 00:05:23.692079 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:05:23.692271 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:05:23.692309 ntpd[1955]: Listen normally on 3 eth0 172.31.17.100:123 Sep 13 00:05:23.692350 ntpd[1955]: Listen normally on 4 lo [::1]:123 Sep 13 00:05:23.692397 ntpd[1955]: bind(21) AF_INET6 fe80::49c:fbff:fe4f:f715%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:05:23.692419 ntpd[1955]: unable to create socket on eth0 (5) for fe80::49c:fbff:fe4f:f715%2#123 Sep 13 00:05:23.692434 ntpd[1955]: failed to init interface for address fe80::49c:fbff:fe4f:f715%2 Sep 13 00:05:23.692463 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Sep 13 00:05:23.701048 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:05:23.704115 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:05:23.704115 ntpd[1955]: 13 Sep 00:05:23 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:05:23.701083 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:05:23.706509 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:05:23.721571 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 13 00:05:23.779914 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
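The coreos-metadata entries above show the IMDSv2 flow: a PUT to mint a session token, then token-authenticated GETs against the versioned metadata paths, with the 404 on `meta-data/ipv6` treated as non-fatal (this instance has no IPv6 address). A minimal sketch of the same flow, using only the endpoint paths and headers visible in or implied by the log:

```python
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # IMDSv2: PUT /latest/api/token with a TTL header returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str | None:
    # GET a metadata path (API version 2021-01-03, as in the log),
    # presenting the session token; a 404 means "not present", e.g.
    # meta-data/ipv6 on an IPv4-only instance.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None
        raise

token = imds_token()
for path in ("meta-data/instance-id", "meta-data/local-ipv4", "meta-data/ipv6"):
    print(path, "=>", imds_get(path, token))
```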
Sep 13 00:05:23.782855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:05:23.836510 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:05:23.868656 extend-filesystems[2005]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:05:23.868656 extend-filesystems[2005]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:05:23.868656 extend-filesystems[2005]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:05:23.876035 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:05:23.873622 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:05:23.874584 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:05:23.883513 systemd-logind[1964]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 00:05:23.902595 systemd-logind[1964]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 13 00:05:23.902628 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:05:23.907420 systemd-logind[1964]: New seat seat0. Sep 13 00:05:23.909005 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:05:23.922054 bash[2033]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:23.924261 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:05:23.939923 systemd[1]: Starting sshkeys.service... Sep 13 00:05:23.963408 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1732) Sep 13 00:05:24.003021 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:05:24.013960 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:05:24.067004 coreos-metadata[2048]: Sep 13 00:05:24.066 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:05:24.067004 coreos-metadata[2048]: Sep 13 00:05:24.066 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 13 00:05:24.077591 coreos-metadata[2048]: Sep 13 00:05:24.077 INFO Fetch successful Sep 13 00:05:24.077591 coreos-metadata[2048]: Sep 13 00:05:24.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:05:24.078519 coreos-metadata[2048]: Sep 13 00:05:24.078 INFO Fetch successful Sep 13 00:05:24.095453 unknown[2048]: wrote ssh authorized keys file for user: core Sep 13 00:05:24.138866 update-ssh-keys[2067]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:05:24.141733 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:05:24.149115 systemd[1]: Finished sshkeys.service. Sep 13 00:05:24.173090 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:05:24.173429 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 00:05:24.178672 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:05:24.190946 systemd[1]: Starting polkit.service - Authorization Manager... 
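The resize sequence above (553472 → 1489915 blocks, "on-line resizing required") is the standard grow path for a mounted ext4 root: the partition was already enlarged, so resize2fs only has to grow the filesystem into it. A minimal sketch, with the device name taken from the log; must run as root:

```python
import subprocess

DEV = "/dev/nvme0n1p9"  # root partition, per the log above

def grow_ext4(device: str) -> None:
    # resize2fs with no explicit size grows the filesystem to fill the
    # block device; on a mounted ext4 filesystem this triggers the same
    # on-line resize the kernel logged above.
    subprocess.run(["resize2fs", device], check=True)

grow_ext4(DEV)
```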
Sep 13 00:05:24.258596 polkitd[2079]: Started polkitd version 121 Sep 13 00:05:24.283404 polkitd[2079]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:05:24.283545 polkitd[2079]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:05:24.289758 polkitd[2079]: Finished loading, compiling and executing 2 rules Sep 13 00:05:24.294004 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:05:24.294241 systemd[1]: Started polkit.service - Authorization Manager. Sep 13 00:05:24.295263 polkitd[2079]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:05:24.348266 locksmithd[1998]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:05:24.367781 systemd-resolved[1904]: System hostname changed to 'ip-172-31-17-100'. Sep 13 00:05:24.368218 systemd-hostnamed[1993]: Hostname set to (transient) Sep 13 00:05:24.411545 containerd[1977]: time="2025-09-13T00:05:24.411379320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:05:24.480654 containerd[1977]: time="2025-09-13T00:05:24.480565638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485595944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485654921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485681708Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485881354Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485905686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.485981164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.486001141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.486246818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.486271378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.486294705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:24.486973 containerd[1977]: time="2025-09-13T00:05:24.486311254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.487405 containerd[1977]: time="2025-09-13T00:05:24.486415800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.487405 containerd[1977]: time="2025-09-13T00:05:24.486724696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:05:24.487700 containerd[1977]: time="2025-09-13T00:05:24.487670906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:05:24.488002 containerd[1977]: time="2025-09-13T00:05:24.487761631Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:05:24.488002 containerd[1977]: time="2025-09-13T00:05:24.487901512Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:05:24.488002 containerd[1977]: time="2025-09-13T00:05:24.487965181Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:05:24.492612 systemd-networkd[1903]: eth0: Gained IPv6LL Sep 13 00:05:24.493595 containerd[1977]: time="2025-09-13T00:05:24.493475759Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:05:24.495667 containerd[1977]: time="2025-09-13T00:05:24.493788663Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:05:24.495667 containerd[1977]: time="2025-09-13T00:05:24.493821933Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:05:24.495667 containerd[1977]: time="2025-09-13T00:05:24.493844278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:05:24.495667 containerd[1977]: time="2025-09-13T00:05:24.493865307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:05:24.495667 containerd[1977]: time="2025-09-13T00:05:24.494034898Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.497718311Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.497916710Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.497944587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.497965317Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.497986156Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498007846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498026574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498047998Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498071960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498091550Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498109998Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498127570Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498156600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499106 containerd[1977]: time="2025-09-13T00:05:24.498191775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498210127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498230175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498250455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498271246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498290330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498310419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498330088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498351003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498369447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498389870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498408202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498434806Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:05:24.499669 containerd[1977]: time="2025-09-13T00:05:24.498468793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.500142382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.500184587Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.500282094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501117418Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501149818Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501175578Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501194619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501214678Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501236598Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:05:24.502434 containerd[1977]: time="2025-09-13T00:05:24.501251401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:05:24.500710 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:05:24.502474 systemd[1]: Reached target network-online.target - Network is Online. 
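The CRI plugin config dumped in the next lines includes a few settings worth pulling out: runc driven through `io.containerd.runc.v2` with `SystemdCgroup:true`, sandbox image `registry.k8s.io/pause:3.8`, and CNI paths under /opt/cni/bin and /etc/cni/net.d. A hedged rendering of those values as the corresponding `config.toml` fragment (field names follow containerd 1.7's CRI plugin; treat this as illustrative, not the exact file this host used):

```python
# Write the fragment somewhere harmless for inspection; a real host would
# place it at /etc/containerd/config.toml and restart containerd.
CONFIG_TOML = """\
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
"""

with open("/tmp/config.toml.example", "w") as f:
    f.write(CONFIG_TOML)
```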
Sep 13 00:05:24.503007 containerd[1977]: time="2025-09-13T00:05:24.501707141Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:05:24.503007 containerd[1977]: time="2025-09-13T00:05:24.501795999Z" level=info msg="Connect containerd service" Sep 13 00:05:24.503007 containerd[1977]: time="2025-09-13T00:05:24.501853423Z" level=info msg="using legacy CRI server" Sep 13 00:05:24.503007 containerd[1977]: time="2025-09-13T00:05:24.501867932Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:05:24.503007 containerd[1977]: time="2025-09-13T00:05:24.502040260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.508739897Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:05:24.509544 containerd[1977]: 
time="2025-09-13T00:05:24.509186354Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509242154Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509287774Z" level=info msg="Start subscribing containerd event" Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509341194Z" level=info msg="Start recovering state" Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509439391Z" level=info msg="Start event monitor" Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509456303Z" level=info msg="Start snapshots syncer" Sep 13 00:05:24.509544 containerd[1977]: time="2025-09-13T00:05:24.509469765Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:05:24.513194 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 13 00:05:24.518685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:24.519870 containerd[1977]: time="2025-09-13T00:05:24.519298690Z" level=info msg="Start streaming server" Sep 13 00:05:24.525862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:05:24.540242 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:05:24.540858 containerd[1977]: time="2025-09-13T00:05:24.540164401Z" level=info msg="containerd successfully booted in 0.130401s" Sep 13 00:05:24.618331 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:05:24.692733 amazon-ssm-agent[2155]: Initializing new seelog logger Sep 13 00:05:24.694507 amazon-ssm-agent[2155]: New Seelog Logger Creation Complete Sep 13 00:05:24.694507 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.694507 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.695706 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 processing appconfig overrides Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 processing appconfig overrides Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.699750 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 processing appconfig overrides Sep 13 00:05:24.702967 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO Proxy environment variables: Sep 13 00:05:24.706699 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:05:24.706699 amazon-ssm-agent[2155]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 13 00:05:24.706699 amazon-ssm-agent[2155]: 2025/09/13 00:05:24 processing appconfig overrides Sep 13 00:05:24.804482 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO https_proxy: Sep 13 00:05:24.906169 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO http_proxy: Sep 13 00:05:25.004413 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO no_proxy: Sep 13 00:05:25.103802 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO Checking if agent identity type OnPrem can be assumed Sep 13 00:05:25.141526 tar[1974]: linux-amd64/LICENSE Sep 13 00:05:25.141526 tar[1974]: linux-amd64/README.md Sep 13 00:05:25.157438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:05:25.202213 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO Checking if agent identity type EC2 can be assumed Sep 13 00:05:25.277094 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:05:25.301450 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO Agent will take identity from EC2 Sep 13 00:05:25.314411 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:05:25.323615 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:05:25.343740 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:05:25.344179 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:05:25.353635 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:05:25.368585 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:05:25.380713 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:05:25.391828 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:05:25.392759 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:05:25.401917 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] Starting Core Agent Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [Registrar] Starting registrar module Sep 13 00:05:25.486752 amazon-ssm-agent[2155]: 2025-09-13 00:05:24 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 13 00:05:25.487063 amazon-ssm-agent[2155]: 2025-09-13 00:05:25 INFO [EC2Identity] EC2 registration was successful. 
Sep 13 00:05:25.487063 amazon-ssm-agent[2155]: 2025-09-13 00:05:25 INFO [CredentialRefresher] credentialRefresher has started Sep 13 00:05:25.487063 amazon-ssm-agent[2155]: 2025-09-13 00:05:25 INFO [CredentialRefresher] Starting credentials refresher loop Sep 13 00:05:25.487063 amazon-ssm-agent[2155]: 2025-09-13 00:05:25 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 13 00:05:25.501538 amazon-ssm-agent[2155]: 2025-09-13 00:05:25 INFO [CredentialRefresher] Next credential rotation will be in 30.816661304616666 minutes Sep 13 00:05:26.500306 amazon-ssm-agent[2155]: 2025-09-13 00:05:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 13 00:05:26.601113 amazon-ssm-agent[2155]: 2025-09-13 00:05:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2194) started Sep 13 00:05:26.661347 ntpd[1955]: Listen normally on 6 eth0 [fe80::49c:fbff:fe4f:f715%2]:123 Sep 13 00:05:26.661721 ntpd[1955]: 13 Sep 00:05:26 ntpd[1955]: Listen normally on 6 eth0 [fe80::49c:fbff:fe4f:f715%2]:123 Sep 13 00:05:26.701297 amazon-ssm-agent[2155]: 2025-09-13 00:05:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 13 00:05:27.196886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:27.197998 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:05:27.200307 systemd[1]: Startup finished in 614ms (kernel) + 6.310s (initrd) + 7.233s (userspace) = 14.157s. Sep 13 00:05:27.203307 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:05:27.393440 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:05:27.398843 systemd[1]: Started sshd@0-172.31.17.100:22-139.178.89.65:59310.service - OpenSSH per-connection server daemon (139.178.89.65:59310). Sep 13 00:05:27.589386 sshd[2216]: Accepted publickey for core from 139.178.89.65 port 59310 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:27.592352 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:27.603289 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:05:27.608918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:05:27.612758 systemd-logind[1964]: New session 1 of user core. Sep 13 00:05:27.628367 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:05:27.637932 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:05:27.652964 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:05:27.794616 systemd[2224]: Queued start job for default target default.target. Sep 13 00:05:27.801632 systemd[2224]: Created slice app.slice - User Application Slice. Sep 13 00:05:27.801664 systemd[2224]: Reached target paths.target - Paths. Sep 13 00:05:27.801678 systemd[2224]: Reached target timers.target - Timers. Sep 13 00:05:27.803044 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:05:27.815722 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:05:27.815907 systemd[2224]: Reached target sockets.target - Sockets. 
Sep 13 00:05:27.815924 systemd[2224]: Reached target basic.target - Basic System. Sep 13 00:05:27.815971 systemd[2224]: Reached target default.target - Main User Target. Sep 13 00:05:27.816002 systemd[2224]: Startup finished in 154ms. Sep 13 00:05:27.816965 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:05:27.821726 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:05:27.972613 systemd[1]: Started sshd@1-172.31.17.100:22-139.178.89.65:59312.service - OpenSSH per-connection server daemon (139.178.89.65:59312). Sep 13 00:05:28.128934 sshd[2235]: Accepted publickey for core from 139.178.89.65 port 59312 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:28.130385 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:28.136474 systemd-logind[1964]: New session 2 of user core. Sep 13 00:05:28.138731 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:05:28.263338 sshd[2235]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:28.266900 systemd[1]: sshd@1-172.31.17.100:22-139.178.89.65:59312.service: Deactivated successfully. Sep 13 00:05:28.268873 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:05:28.271168 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:05:28.273107 systemd-logind[1964]: Removed session 2. Sep 13 00:05:28.293774 systemd[1]: Started sshd@2-172.31.17.100:22-139.178.89.65:59316.service - OpenSSH per-connection server daemon (139.178.89.65:59316). Sep 13 00:05:28.458541 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 59316 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:28.459957 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:28.464319 systemd-logind[1964]: New session 3 of user core. Sep 13 00:05:28.473730 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:05:28.528029 kubelet[2210]: E0913 00:05:28.527901 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:05:28.530622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:05:28.530772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:05:28.531206 systemd[1]: kubelet.service: Consumed 1.136s CPU time. Sep 13 00:05:28.590798 sshd[2243]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:28.593846 systemd[1]: sshd@2-172.31.17.100:22-139.178.89.65:59316.service: Deactivated successfully. Sep 13 00:05:28.595433 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:05:28.596858 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:05:28.597793 systemd-logind[1964]: Removed session 3. Sep 13 00:05:28.620142 systemd[1]: Started sshd@3-172.31.17.100:22-139.178.89.65:47992.service - OpenSSH per-connection server daemon (139.178.89.65:47992). 
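The kubelet failure above is expected at this stage: it exits because /var/lib/kubelet/config.yaml does not exist yet (on kubeadm-managed nodes that file is typically written during `kubeadm init`/`join`, so the unit keeps failing until the node is joined). A hedged sketch of a minimal KubeletConfiguration, just to show the shape of the file the error refers to; the values are illustrative, not this node's eventual config:

```python
import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
"""

# Written to /tmp for inspection; the kubelet reads it from the path
# passed via --config (here, /var/lib/kubelet/config.yaml).
path = pathlib.Path("/tmp/kubelet-config.example.yaml")
path.write_text(MINIMAL_KUBELET_CONFIG)
print(f"wrote {path}")
```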
Sep 13 00:05:28.787550 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 47992 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:28.788843 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:28.794075 systemd-logind[1964]: New session 4 of user core. Sep 13 00:05:28.800812 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:05:28.920846 sshd[2251]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:28.923650 systemd[1]: sshd@3-172.31.17.100:22-139.178.89.65:47992.service: Deactivated successfully. Sep 13 00:05:28.925941 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:05:28.934445 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:05:28.937187 systemd-logind[1964]: Removed session 4. Sep 13 00:05:28.957946 systemd[1]: Started sshd@4-172.31.17.100:22-139.178.89.65:48004.service - OpenSSH per-connection server daemon (139.178.89.65:48004). Sep 13 00:05:29.112632 sshd[2258]: Accepted publickey for core from 139.178.89.65 port 48004 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:29.114059 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:29.119692 systemd-logind[1964]: New session 5 of user core. Sep 13 00:05:29.127744 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:05:29.260825 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:05:29.261130 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:29.279011 sudo[2261]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:29.302038 sshd[2258]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:29.305921 systemd[1]: sshd@4-172.31.17.100:22-139.178.89.65:48004.service: Deactivated successfully. Sep 13 00:05:29.307463 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:05:29.308474 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:05:29.309469 systemd-logind[1964]: Removed session 5. Sep 13 00:05:29.337869 systemd[1]: Started sshd@5-172.31.17.100:22-139.178.89.65:48014.service - OpenSSH per-connection server daemon (139.178.89.65:48014). Sep 13 00:05:29.495032 sshd[2266]: Accepted publickey for core from 139.178.89.65 port 48014 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:29.495317 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:29.500809 systemd-logind[1964]: New session 6 of user core. Sep 13 00:05:29.503732 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:05:29.607357 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:05:29.607680 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:29.611514 sudo[2270]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:29.617674 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:05:29.617967 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:29.642834 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
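The `systemctl restart audit-rules` issued via sudo above completes in the next lines ("No rules", then "Finished audit-rules.service"). What that restart amounts to is a flush of the in-kernel rules followed by a reload of the compiled rules.d set; a sketch using the standard auditd userspace tools, which must run as root:

```python
import subprocess

# Flush all loaded audit rules; auditctl then reports "No rules",
# matching the log line below.
subprocess.run(["auditctl", "-D"], check=True)

# Regenerate and load the rule set compiled from /etc/audit/rules.d
# (here empty, since the sudo commands above deleted the rule files).
subprocess.run(["augenrules", "--load"], check=True)
```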
Sep 13 00:05:29.644762 auditctl[2273]: No rules Sep 13 00:05:29.645158 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:05:29.645340 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:05:29.654307 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:05:29.681074 augenrules[2291]: No rules Sep 13 00:05:29.682759 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:05:29.684579 sudo[2269]: pam_unix(sudo:session): session closed for user root Sep 13 00:05:29.707477 sshd[2266]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:29.710261 systemd[1]: sshd@5-172.31.17.100:22-139.178.89.65:48014.service: Deactivated successfully. Sep 13 00:05:29.712075 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:05:29.713396 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:05:29.714740 systemd-logind[1964]: Removed session 6. Sep 13 00:05:29.739747 systemd[1]: Started sshd@6-172.31.17.100:22-139.178.89.65:48024.service - OpenSSH per-connection server daemon (139.178.89.65:48024). Sep 13 00:05:29.898465 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 48024 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:05:29.900762 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:29.905566 systemd-logind[1964]: New session 7 of user core. Sep 13 00:05:29.911784 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:05:30.009912 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:05:30.010201 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:05:30.626889 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:05:30.628732 (dockerd)[2317]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:05:32.026821 systemd-resolved[1904]: Clock change detected. Flushing caches. Sep 13 00:05:32.595821 dockerd[2317]: time="2025-09-13T00:05:32.595524326Z" level=info msg="Starting up" Sep 13 00:05:32.836002 dockerd[2317]: time="2025-09-13T00:05:32.835947689Z" level=info msg="Loading containers: start." Sep 13 00:05:32.960437 kernel: Initializing XFRM netlink socket Sep 13 00:05:32.991518 (udev-worker)[2343]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:05:33.060253 systemd-networkd[1903]: docker0: Link UP Sep 13 00:05:33.075132 dockerd[2317]: time="2025-09-13T00:05:33.075077756Z" level=info msg="Loading containers: done." 
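Once dockerd finishes initializing (next lines), its API listens on /run/docker.sock. A minimal sketch of talking to that API directly, as `docker version` does, using nothing but a Unix socket and one HTTP request; assumptions: the socket path announced in the log and permission to read it:

```python
import json
import socket

def docker_api_get(path: str) -> dict:
    # One HTTP/1.0 request over the daemon's Unix socket; with HTTP/1.0
    # the daemon closes the connection after the response, so reading to
    # EOF captures the whole body.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/run/docker.sock")
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode())
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk
    s.close()
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

info = docker_api_get("/version")
print(info.get("Version"), info.get("ApiVersion"))
```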
Sep 13 00:05:33.094743 dockerd[2317]: time="2025-09-13T00:05:33.094671788Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:05:33.094931 dockerd[2317]: time="2025-09-13T00:05:33.094795929Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:05:33.094931 dockerd[2317]: time="2025-09-13T00:05:33.094918366Z" level=info msg="Daemon has completed initialization" Sep 13 00:05:33.137007 dockerd[2317]: time="2025-09-13T00:05:33.136936727Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:05:33.137570 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:05:34.385894 containerd[1977]: time="2025-09-13T00:05:34.385849176Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:05:34.938430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423938044.mount: Deactivated successfully. Sep 13 00:05:36.347271 containerd[1977]: time="2025-09-13T00:05:36.347208316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.348206 containerd[1977]: time="2025-09-13T00:05:36.348166505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:05:36.349457 containerd[1977]: time="2025-09-13T00:05:36.349087141Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.351717 containerd[1977]: time="2025-09-13T00:05:36.351685758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:36.352867 containerd[1977]: time="2025-09-13T00:05:36.352776410Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.966886491s" Sep 13 00:05:36.352867 containerd[1977]: time="2025-09-13T00:05:36.352816781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:05:36.353395 containerd[1977]: time="2025-09-13T00:05:36.353363567Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:05:37.940663 containerd[1977]: time="2025-09-13T00:05:37.940605599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:37.942067 containerd[1977]: time="2025-09-13T00:05:37.942014837Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:05:37.943456 containerd[1977]: time="2025-09-13T00:05:37.943003872Z" level=info msg="ImageCreate event 
name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:37.946057 containerd[1977]: time="2025-09-13T00:05:37.946012860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:37.947303 containerd[1977]: time="2025-09-13T00:05:37.947262827Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.593864601s" Sep 13 00:05:37.947401 containerd[1977]: time="2025-09-13T00:05:37.947308081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:05:37.947855 containerd[1977]: time="2025-09-13T00:05:37.947829167Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:05:39.275072 containerd[1977]: time="2025-09-13T00:05:39.275016695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.283395 containerd[1977]: time="2025-09-13T00:05:39.283328247Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:05:39.293983 containerd[1977]: time="2025-09-13T00:05:39.293904485Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.308608 containerd[1977]: time="2025-09-13T00:05:39.308516836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:39.309895 containerd[1977]: time="2025-09-13T00:05:39.309766053Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.361898715s" Sep 13 00:05:39.309895 containerd[1977]: time="2025-09-13T00:05:39.309803046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:05:39.310892 containerd[1977]: time="2025-09-13T00:05:39.310817545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:05:40.005909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:05:40.015716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:40.303762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:05:40.306216 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:05:40.376441 kubelet[2536]: E0913 00:05:40.376173 2536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:05:40.382544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:05:40.382744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:05:40.599877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647370649.mount: Deactivated successfully. Sep 13 00:05:41.160546 containerd[1977]: time="2025-09-13T00:05:41.160495537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:41.161608 containerd[1977]: time="2025-09-13T00:05:41.161432261Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:05:41.162585 containerd[1977]: time="2025-09-13T00:05:41.162553103Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:41.164984 containerd[1977]: time="2025-09-13T00:05:41.164919402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:41.165805 containerd[1977]: time="2025-09-13T00:05:41.165636418Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.854761083s" Sep 13 00:05:41.165805 containerd[1977]: time="2025-09-13T00:05:41.165678648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:05:41.166376 containerd[1977]: time="2025-09-13T00:05:41.166341976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:05:41.611604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592190468.mount: Deactivated successfully. 
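Each "Pulled image ... in ...s" line above pairs an image size in bytes with a wall-clock duration, so effective pull throughput is just size over seconds. A worked example using the kube-proxy pull logged below this point (numbers copied from that line):

```python
# From the kube-proxy pull line: size "30409271" bytes in 1.854761083s.
size_bytes = 30_409_271
seconds = 1.854761083

mib_per_s = size_bytes / seconds / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")  # ~15.6 MiB/s effective pull throughput
```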
Sep 13 00:05:42.703266 containerd[1977]: time="2025-09-13T00:05:42.703207616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.704388 containerd[1977]: time="2025-09-13T00:05:42.704344004Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:05:42.705527 containerd[1977]: time="2025-09-13T00:05:42.705466258Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.708908 containerd[1977]: time="2025-09-13T00:05:42.708234501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:42.709643 containerd[1977]: time="2025-09-13T00:05:42.709600747Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.543221676s" Sep 13 00:05:42.709643 containerd[1977]: time="2025-09-13T00:05:42.709639551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:05:42.710166 containerd[1977]: time="2025-09-13T00:05:42.710140884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:05:43.149524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788511209.mount: Deactivated successfully. 
Sep 13 00:05:43.155747 containerd[1977]: time="2025-09-13T00:05:43.155687716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:43.157018 containerd[1977]: time="2025-09-13T00:05:43.156954768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:05:43.159438 containerd[1977]: time="2025-09-13T00:05:43.157941270Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:43.160607 containerd[1977]: time="2025-09-13T00:05:43.160570527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:43.161601 containerd[1977]: time="2025-09-13T00:05:43.161575812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.404561ms" Sep 13 00:05:43.161707 containerd[1977]: time="2025-09-13T00:05:43.161693718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:05:43.162629 containerd[1977]: time="2025-09-13T00:05:43.162605896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:05:43.632263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019065694.mount: Deactivated successfully. Sep 13 00:05:46.002180 containerd[1977]: time="2025-09-13T00:05:46.002120047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:46.007308 containerd[1977]: time="2025-09-13T00:05:46.007239938Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:05:46.011595 containerd[1977]: time="2025-09-13T00:05:46.011377252Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:46.017169 containerd[1977]: time="2025-09-13T00:05:46.017090035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:05:46.018970 containerd[1977]: time="2025-09-13T00:05:46.018918874Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.856284409s" Sep 13 00:05:46.018970 containerd[1977]: time="2025-09-13T00:05:46.018967923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:05:48.836134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:05:48.842758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:48.883616 systemd[1]: Reloading requested from client PID 2686 ('systemctl') (unit session-7.scope)... Sep 13 00:05:48.883634 systemd[1]: Reloading... Sep 13 00:05:49.009437 zram_generator::config[2730]: No configuration found. Sep 13 00:05:49.168879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:49.255929 systemd[1]: Reloading finished in 371 ms. Sep 13 00:05:49.312080 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:05:49.312197 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:05:49.312565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:49.317934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:49.534885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:49.545830 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:05:49.591930 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:49.592449 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:05:49.592449 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
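The three deprecation warnings above are kubelet asking the operator to move flags into the file named by --config. A hedged sketch of the equivalent KubeletConfiguration stanza, emitted as JSON (the config parser accepts JSON since it treats it as YAML); the field names are the real kubelet.config.k8s.io/v1beta1 ones, but the socket path is an assumed typical containerd endpoint, and the plugin dir is copied from the Flexvolume probe later in this log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config-file equivalents of --container-runtime-endpoint and
// --volume-plugin-dir. (--pod-infra-container-image has no config
// field; per the warning above, sandbox image info moves to the CRI side.)
type kubeletConfig struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"` // assumed value below
	VolumePluginDir          string `json:"volumePluginDir,omitempty"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```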
Sep 13 00:05:49.595170 kubelet[2791]: I0913 00:05:49.595096 2791 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:05:50.098678 kubelet[2791]: I0913 00:05:50.098631 2791 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:05:50.098678 kubelet[2791]: I0913 00:05:50.098667 2791 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:05:50.099068 kubelet[2791]: I0913 00:05:50.099045 2791 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:05:50.152319 kubelet[2791]: E0913 00:05:50.152273 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.154427 kubelet[2791]: I0913 00:05:50.154355 2791 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:05:50.170695 kubelet[2791]: E0913 00:05:50.170600 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:05:50.170695 kubelet[2791]: I0913 00:05:50.170631 2791 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:05:50.178447 kubelet[2791]: I0913 00:05:50.178392 2791 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:05:50.181196 kubelet[2791]: I0913 00:05:50.181137 2791 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:05:50.181480 kubelet[2791]: I0913 00:05:50.181426 2791 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:05:50.181771 kubelet[2791]: I0913 00:05:50.181475 2791 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:05:50.181937 kubelet[2791]: I0913 00:05:50.181778 2791 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:05:50.181937 kubelet[2791]: I0913 00:05:50.181796 2791 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:05:50.182028 kubelet[2791]: I0913 00:05:50.181963 2791 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:50.189487 kubelet[2791]: I0913 00:05:50.189376 2791 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:05:50.189487 kubelet[2791]: I0913 00:05:50.189485 2791 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:50.191795 kubelet[2791]: I0913 00:05:50.191748 2791 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:05:50.191795 kubelet[2791]: I0913 00:05:50.191797 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:50.200387 kubelet[2791]: W0913 00:05:50.200333 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-100&limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:50.200740 kubelet[2791]: E0913 00:05:50.200570 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.17.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-100&limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.200740 kubelet[2791]: I0913 00:05:50.200732 2791 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:05:50.204560 kubelet[2791]: I0913 00:05:50.204530 2791 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:05:50.205657 kubelet[2791]: W0913 00:05:50.205600 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:50.205745 kubelet[2791]: E0913 00:05:50.205659 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.207046 kubelet[2791]: W0913 00:05:50.207017 2791 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:05:50.207821 kubelet[2791]: I0913 00:05:50.207730 2791 server.go:1274] "Started kubelet" Sep 13 00:05:50.209002 kubelet[2791]: I0913 00:05:50.208971 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:50.216080 kubelet[2791]: I0913 00:05:50.216028 2791 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:50.218058 kubelet[2791]: I0913 00:05:50.217070 2791 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:05:50.231590 kubelet[2791]: E0913 00:05:50.227943 2791 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.100:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-100.1864aecd080977ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-100,UID:ip-172-31-17-100,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-100,},FirstTimestamp:2025-09-13 00:05:50.20770302 +0000 UTC m=+0.658052476,LastTimestamp:2025-09-13 00:05:50.20770302 +0000 UTC m=+0.658052476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-100,}" Sep 13 00:05:50.234956 kubelet[2791]: I0913 00:05:50.233852 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:50.234956 kubelet[2791]: I0913 00:05:50.234163 2791 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:50.234956 kubelet[2791]: I0913 00:05:50.234593 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:50.241219 kubelet[2791]: E0913 00:05:50.236561 2791 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-17-100\" not found" Sep 13 00:05:50.241219 kubelet[2791]: I0913 00:05:50.236623 2791 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:50.241219 kubelet[2791]: I0913 00:05:50.236991 2791 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:50.241219 kubelet[2791]: I0913 00:05:50.237059 2791 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:50.241219 kubelet[2791]: E0913 00:05:50.238816 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-100?timeout=10s\": dial tcp 172.31.17.100:6443: connect: connection refused" interval="200ms" Sep 13 00:05:50.241445 kubelet[2791]: I0913 00:05:50.241365 2791 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:05:50.241530 kubelet[2791]: I0913 00:05:50.241500 2791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:50.242922 kubelet[2791]: W0913 00:05:50.242799 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:50.242922 kubelet[2791]: E0913 00:05:50.242875 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.245470 kubelet[2791]: E0913 00:05:50.244818 2791 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:05:50.245470 kubelet[2791]: I0913 00:05:50.245091 2791 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:05:50.258274 kubelet[2791]: I0913 00:05:50.257467 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:50.259845 kubelet[2791]: I0913 00:05:50.259802 2791 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:05:50.259845 kubelet[2791]: I0913 00:05:50.259838 2791 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:05:50.259984 kubelet[2791]: I0913 00:05:50.259864 2791 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:05:50.259984 kubelet[2791]: E0913 00:05:50.259912 2791 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:50.269435 kubelet[2791]: W0913 00:05:50.268399 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:50.269435 kubelet[2791]: E0913 00:05:50.268523 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:50.276902 kubelet[2791]: I0913 00:05:50.276872 2791 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:05:50.276902 kubelet[2791]: I0913 00:05:50.276890 2791 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:50.276902 kubelet[2791]: I0913 00:05:50.276911 2791 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:50.281590 kubelet[2791]: I0913 00:05:50.281560 2791 policy_none.go:49] "None policy: Start" Sep 13 00:05:50.282313 kubelet[2791]: I0913 00:05:50.282288 2791 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:05:50.282313 kubelet[2791]: I0913 00:05:50.282313 2791 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:50.289850 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:05:50.302806 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:05:50.305995 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:05:50.314447 kubelet[2791]: I0913 00:05:50.314419 2791 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:05:50.315339 kubelet[2791]: I0913 00:05:50.314609 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:05:50.315339 kubelet[2791]: I0913 00:05:50.314620 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:50.315339 kubelet[2791]: I0913 00:05:50.314882 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:50.316553 kubelet[2791]: E0913 00:05:50.316536 2791 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-100\" not found" Sep 13 00:05:50.372565 systemd[1]: Created slice kubepods-burstable-pod710b7a4b1ac7689e0063faf08b978236.slice - libcontainer container kubepods-burstable-pod710b7a4b1ac7689e0063faf08b978236.slice. Sep 13 00:05:50.400783 systemd[1]: Created slice kubepods-burstable-pod25dfa2ac38962f69bb14ddc7d7a9dc39.slice - libcontainer container kubepods-burstable-pod25dfa2ac38962f69bb14ddc7d7a9dc39.slice. 
Sep 13 00:05:50.416612 kubelet[2791]: I0913 00:05:50.416562 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:50.417066 kubelet[2791]: E0913 00:05:50.417035 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.100:6443/api/v1/nodes\": dial tcp 172.31.17.100:6443: connect: connection refused" node="ip-172-31-17-100" Sep 13 00:05:50.418462 systemd[1]: Created slice kubepods-burstable-pod30c7281e0fa5422b9e68c24321ae286c.slice - libcontainer container kubepods-burstable-pod30c7281e0fa5422b9e68c24321ae286c.slice. Sep 13 00:05:50.439550 kubelet[2791]: E0913 00:05:50.439500 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-100?timeout=10s\": dial tcp 172.31.17.100:6443: connect: connection refused" interval="400ms" Sep 13 00:05:50.538179 kubelet[2791]: I0913 00:05:50.538100 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:50.538179 kubelet[2791]: I0913 00:05:50.538144 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:50.538179 kubelet[2791]: I0913 00:05:50.538166 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:50.538179 kubelet[2791]: I0913 00:05:50.538183 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:50.538494 kubelet[2791]: I0913 00:05:50.538199 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30c7281e0fa5422b9e68c24321ae286c-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-100\" (UID: \"30c7281e0fa5422b9e68c24321ae286c\") " pod="kube-system/kube-scheduler-ip-172-31-17-100" Sep 13 00:05:50.538494 kubelet[2791]: I0913 00:05:50.538215 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-ca-certs\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:50.538494 kubelet[2791]: I0913 00:05:50.538231 2791 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:50.538494 kubelet[2791]: I0913 00:05:50.538249 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:50.538494 kubelet[2791]: I0913 00:05:50.538264 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:50.619356 kubelet[2791]: I0913 00:05:50.619320 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:50.619876 kubelet[2791]: E0913 00:05:50.619675 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.100:6443/api/v1/nodes\": dial tcp 172.31.17.100:6443: connect: connection refused" node="ip-172-31-17-100" Sep 13 00:05:50.699354 containerd[1977]: time="2025-09-13T00:05:50.699246701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-100,Uid:710b7a4b1ac7689e0063faf08b978236,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.726436 containerd[1977]: time="2025-09-13T00:05:50.724031343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-100,Uid:30c7281e0fa5422b9e68c24321ae286c,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.727489 containerd[1977]: time="2025-09-13T00:05:50.724031433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-100,Uid:25dfa2ac38962f69bb14ddc7d7a9dc39,Namespace:kube-system,Attempt:0,}" Sep 13 00:05:50.841090 kubelet[2791]: E0913 00:05:50.841050 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-100?timeout=10s\": dial tcp 172.31.17.100:6443: connect: connection refused" interval="800ms" Sep 13 00:05:51.022237 kubelet[2791]: I0913 00:05:51.022130 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:51.022498 kubelet[2791]: E0913 00:05:51.022471 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.100:6443/api/v1/nodes\": dial tcp 172.31.17.100:6443: connect: connection refused" node="ip-172-31-17-100" Sep 13 00:05:51.174927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250110204.mount: Deactivated successfully. 
Sep 13 00:05:51.190173 containerd[1977]: time="2025-09-13T00:05:51.190117799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:51.191905 containerd[1977]: time="2025-09-13T00:05:51.191849341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:05:51.193953 containerd[1977]: time="2025-09-13T00:05:51.193909468Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:51.195874 containerd[1977]: time="2025-09-13T00:05:51.195829108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:51.197888 containerd[1977]: time="2025-09-13T00:05:51.197834019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:05:51.200210 containerd[1977]: time="2025-09-13T00:05:51.200164094Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:51.202044 containerd[1977]: time="2025-09-13T00:05:51.201972347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:05:51.205213 containerd[1977]: time="2025-09-13T00:05:51.205186442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:05:51.205826 containerd[1977]: time="2025-09-13T00:05:51.205767457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.141393ms" Sep 13 00:05:51.220327 containerd[1977]: time="2025-09-13T00:05:51.220051651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.937493ms" Sep 13 00:05:51.222893 containerd[1977]: time="2025-09-13T00:05:51.222713041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.389103ms" Sep 13 00:05:51.416988 containerd[1977]: time="2025-09-13T00:05:51.416647199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:51.418657 containerd[1977]: time="2025-09-13T00:05:51.418064475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:51.418657 containerd[1977]: time="2025-09-13T00:05:51.418085801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.418657 containerd[1977]: time="2025-09-13T00:05:51.418175107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.422262 containerd[1977]: time="2025-09-13T00:05:51.421114531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:51.422262 containerd[1977]: time="2025-09-13T00:05:51.421168595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:51.422262 containerd[1977]: time="2025-09-13T00:05:51.421184274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.422262 containerd[1977]: time="2025-09-13T00:05:51.421261444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.427592 containerd[1977]: time="2025-09-13T00:05:51.427321368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:05:51.427592 containerd[1977]: time="2025-09-13T00:05:51.427368507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:05:51.427592 containerd[1977]: time="2025-09-13T00:05:51.427384973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.427592 containerd[1977]: time="2025-09-13T00:05:51.427477009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:05:51.447745 kubelet[2791]: W0913 00:05:51.447697 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:51.448285 kubelet[2791]: E0913 00:05:51.448133 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.451803 systemd[1]: Started cri-containerd-6d72d33c04e0a3ffc8ecabd6062295e4e3a44b52873ed9c29d8177faf38756bf.scope - libcontainer container 6d72d33c04e0a3ffc8ecabd6062295e4e3a44b52873ed9c29d8177faf38756bf. Sep 13 00:05:51.458003 systemd[1]: Started cri-containerd-0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54.scope - libcontainer container 0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54. 
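The cri-containerd-*.scope units starting here (two above, one just below) are the pod sandboxes for the three static control-plane pods; over the following entries kubelet walks each pod through the same three CRI calls. A toy stand-in for the runtime.v1.RuntimeService client that mirrors only that call order; the IDs it prints are fabricated for illustration:

```go
package main

import "fmt"

// fakeRuntime mimics only the RunPodSandbox -> CreateContainer ->
// StartContainer sequence visible in the log, not the real gRPC client.
type fakeRuntime struct{ n int }

func (r *fakeRuntime) RunPodSandbox(pod string) string {
	r.n++
	return fmt.Sprintf("sandbox-%d", r.n)
}

func (r *fakeRuntime) CreateContainer(sandboxID, name string) string {
	return sandboxID + "/" + name
}

func (r *fakeRuntime) StartContainer(id string) {
	fmt.Println("StartContainer", id, "returns successfully")
}

func main() {
	rt := &fakeRuntime{}
	for _, pod := range []string{
		"kube-apiserver-ip-172-31-17-100",
		"kube-controller-manager-ip-172-31-17-100",
		"kube-scheduler-ip-172-31-17-100",
	} {
		id := rt.CreateContainer(rt.RunPodSandbox(pod), pod)
		rt.StartContainer(id)
	}
}
```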
Sep 13 00:05:51.460867 systemd[1]: Started cri-containerd-34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54.scope - libcontainer container 34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54. Sep 13 00:05:51.559092 containerd[1977]: time="2025-09-13T00:05:51.558735896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-100,Uid:710b7a4b1ac7689e0063faf08b978236,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d72d33c04e0a3ffc8ecabd6062295e4e3a44b52873ed9c29d8177faf38756bf\"" Sep 13 00:05:51.573510 containerd[1977]: time="2025-09-13T00:05:51.573358195Z" level=info msg="CreateContainer within sandbox \"6d72d33c04e0a3ffc8ecabd6062295e4e3a44b52873ed9c29d8177faf38756bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:05:51.590245 containerd[1977]: time="2025-09-13T00:05:51.589708086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-100,Uid:25dfa2ac38962f69bb14ddc7d7a9dc39,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54\"" Sep 13 00:05:51.600093 containerd[1977]: time="2025-09-13T00:05:51.600047434Z" level=info msg="CreateContainer within sandbox \"0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:05:51.600814 containerd[1977]: time="2025-09-13T00:05:51.600778043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-100,Uid:30c7281e0fa5422b9e68c24321ae286c,Namespace:kube-system,Attempt:0,} returns sandbox id \"34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54\"" Sep 13 00:05:51.605306 containerd[1977]: time="2025-09-13T00:05:51.605264585Z" level=info msg="CreateContainer within sandbox \"34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:05:51.626480 kubelet[2791]: W0913 00:05:51.626359 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-100&limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:51.626480 kubelet[2791]: E0913 00:05:51.626473 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-100&limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.628809 containerd[1977]: time="2025-09-13T00:05:51.628595629Z" level=info msg="CreateContainer within sandbox \"6d72d33c04e0a3ffc8ecabd6062295e4e3a44b52873ed9c29d8177faf38756bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a6e029547406aa5a5457d496362c65f45833b75713a53194e5916d375d8fd8c\"" Sep 13 00:05:51.629733 containerd[1977]: time="2025-09-13T00:05:51.629701282Z" level=info msg="StartContainer for \"3a6e029547406aa5a5457d496362c65f45833b75713a53194e5916d375d8fd8c\"" Sep 13 00:05:51.635768 containerd[1977]: time="2025-09-13T00:05:51.635598555Z" level=info msg="CreateContainer within sandbox \"0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8\"" Sep 13 00:05:51.636702 containerd[1977]: time="2025-09-13T00:05:51.636573074Z" level=info msg="StartContainer for \"5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8\"" Sep 13 00:05:51.642952 kubelet[2791]: E0913 00:05:51.642362 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-100?timeout=10s\": dial tcp 172.31.17.100:6443: connect: connection refused" interval="1.6s" Sep 13 00:05:51.656175 containerd[1977]: time="2025-09-13T00:05:51.656130667Z" level=info msg="CreateContainer within sandbox \"34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857\"" Sep 13 00:05:51.657932 containerd[1977]: time="2025-09-13T00:05:51.657713275Z" level=info msg="StartContainer for \"33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857\"" Sep 13 00:05:51.681174 systemd[1]: Started cri-containerd-3a6e029547406aa5a5457d496362c65f45833b75713a53194e5916d375d8fd8c.scope - libcontainer container 3a6e029547406aa5a5457d496362c65f45833b75713a53194e5916d375d8fd8c. Sep 13 00:05:51.697171 systemd[1]: Started cri-containerd-5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8.scope - libcontainer container 5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8. Sep 13 00:05:51.726647 systemd[1]: Started cri-containerd-33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857.scope - libcontainer container 33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857. Sep 13 00:05:51.774556 containerd[1977]: time="2025-09-13T00:05:51.774456719Z" level=info msg="StartContainer for \"3a6e029547406aa5a5457d496362c65f45833b75713a53194e5916d375d8fd8c\" returns successfully" Sep 13 00:05:51.795229 kubelet[2791]: W0913 00:05:51.793310 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:51.798022 kubelet[2791]: E0913 00:05:51.797492 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.798022 kubelet[2791]: W0913 00:05:51.797928 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.100:6443: connect: connection refused Sep 13 00:05:51.798022 kubelet[2791]: E0913 00:05:51.797995 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:51.806731 containerd[1977]: time="2025-09-13T00:05:51.806690840Z" level=info msg="StartContainer for 
\"5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8\" returns successfully" Sep 13 00:05:51.825370 kubelet[2791]: I0913 00:05:51.825321 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:51.826149 containerd[1977]: time="2025-09-13T00:05:51.825739053Z" level=info msg="StartContainer for \"33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857\" returns successfully" Sep 13 00:05:51.826710 kubelet[2791]: E0913 00:05:51.826446 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.100:6443/api/v1/nodes\": dial tcp 172.31.17.100:6443: connect: connection refused" node="ip-172-31-17-100" Sep 13 00:05:52.243182 kubelet[2791]: E0913 00:05:52.243134 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.100:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:05:53.428971 kubelet[2791]: I0913 00:05:53.428877 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:54.514121 kubelet[2791]: E0913 00:05:54.514066 2791 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-100\" not found" node="ip-172-31-17-100" Sep 13 00:05:54.657347 kubelet[2791]: I0913 00:05:54.657157 2791 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-100" Sep 13 00:05:54.657347 kubelet[2791]: E0913 00:05:54.657203 2791 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-100\": node \"ip-172-31-17-100\" not found" Sep 13 00:05:54.849198 kubelet[2791]: E0913 00:05:54.849058 2791 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-100\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:55.206813 kubelet[2791]: I0913 00:05:55.206777 2791 apiserver.go:52] "Watching apiserver" Sep 13 00:05:55.237545 kubelet[2791]: I0913 00:05:55.237503 2791 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:55.755428 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:05:56.504148 systemd[1]: Reloading requested from client PID 3067 ('systemctl') (unit session-7.scope)... Sep 13 00:05:56.504164 systemd[1]: Reloading... Sep 13 00:05:56.594991 zram_generator::config[3106]: No configuration found. Sep 13 00:05:56.732853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:05:56.834217 systemd[1]: Reloading finished in 329 ms. Sep 13 00:05:56.879351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:56.897919 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:05:56.898133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:56.898190 systemd[1]: kubelet.service: Consumed 1.030s CPU time, 125.1M memory peak, 0B memory swap peak. 
Sep 13 00:05:56.906715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:05:57.141707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:05:57.153919 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:05:57.234081 kubelet[3167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:57.235340 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:05:57.235647 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:05:57.235647 kubelet[3167]: I0913 00:05:57.235529 3167 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:05:57.242685 kubelet[3167]: I0913 00:05:57.242659 3167 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:05:57.242905 kubelet[3167]: I0913 00:05:57.242805 3167 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:05:57.243331 kubelet[3167]: I0913 00:05:57.243191 3167 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:05:57.244755 kubelet[3167]: I0913 00:05:57.244734 3167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:05:57.251179 kubelet[3167]: I0913 00:05:57.251008 3167 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:05:57.256569 kubelet[3167]: E0913 00:05:57.256396 3167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:05:57.256569 kubelet[3167]: I0913 00:05:57.256452 3167 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:05:57.264449 kubelet[3167]: I0913 00:05:57.263845 3167 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:05:57.265810 kubelet[3167]: I0913 00:05:57.265774 3167 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:05:57.268141 kubelet[3167]: I0913 00:05:57.268095 3167 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:05:57.268750 kubelet[3167]: I0913 00:05:57.268289 3167 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-100","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:05:57.268750 kubelet[3167]: I0913 00:05:57.268505 3167 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:05:57.268750 kubelet[3167]: I0913 00:05:57.268516 3167 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:05:57.268750 kubelet[3167]: I0913 00:05:57.268550 3167 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:57.269018 kubelet[3167]: I0913 00:05:57.269007 3167 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:05:57.269157 kubelet[3167]: I0913 00:05:57.269146 3167 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:05:57.269239 kubelet[3167]: I0913 00:05:57.269232 3167 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:05:57.269287 kubelet[3167]: I0913 00:05:57.269280 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:05:57.273453 kubelet[3167]: I0913 00:05:57.272655 3167 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:05:57.273453 kubelet[3167]: I0913 00:05:57.273230 3167 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:05:57.275834 kubelet[3167]: I0913 00:05:57.275813 3167 server.go:1274] "Started kubelet" Sep 13 00:05:57.279543 kubelet[3167]: I0913 00:05:57.278992 3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:05:57.293430 kubelet[3167]: I0913 
00:05:57.293370 3167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:05:57.295916 kubelet[3167]: I0913 00:05:57.295167 3167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:05:57.295916 kubelet[3167]: I0913 00:05:57.295612 3167 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:05:57.296087 kubelet[3167]: I0913 00:05:57.296005 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:05:57.298700 kubelet[3167]: I0913 00:05:57.298679 3167 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:05:57.299027 kubelet[3167]: E0913 00:05:57.299008 3167 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-100\" not found" Sep 13 00:05:57.299912 kubelet[3167]: I0913 00:05:57.299892 3167 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:05:57.300044 kubelet[3167]: I0913 00:05:57.300032 3167 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:05:57.306064 kubelet[3167]: I0913 00:05:57.306034 3167 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:05:57.314877 kubelet[3167]: I0913 00:05:57.314845 3167 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:05:57.319439 kubelet[3167]: I0913 00:05:57.317123 3167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:05:57.329021 kubelet[3167]: E0913 00:05:57.328987 3167 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:05:57.331473 kubelet[3167]: I0913 00:05:57.330230 3167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:05:57.333435 kubelet[3167]: I0913 00:05:57.330330 3167 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:05:57.341210 kubelet[3167]: I0913 00:05:57.341174 3167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:05:57.341484 kubelet[3167]: I0913 00:05:57.341404 3167 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:05:57.341635 kubelet[3167]: I0913 00:05:57.341622 3167 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:05:57.341840 kubelet[3167]: E0913 00:05:57.341812 3167 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.392821 3167 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.392844 3167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.392867 3167 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.393059 3167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.393075 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:05:57.393237 kubelet[3167]: I0913 00:05:57.393101 3167 policy_none.go:49] "None policy: Start" Sep 13 00:05:57.395649 kubelet[3167]: I0913 00:05:57.395621 3167 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:05:57.395649 kubelet[3167]: I0913 00:05:57.395652 3167 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:05:57.395962 kubelet[3167]: I0913 00:05:57.395940 3167 state_mem.go:75] "Updated machine memory state" Sep 13 00:05:57.401938 kubelet[3167]: I0913 00:05:57.401912 3167 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:05:57.402472 kubelet[3167]: I0913 00:05:57.402340 3167 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:05:57.402472 kubelet[3167]: I0913 00:05:57.402358 3167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:05:57.404175 kubelet[3167]: I0913 00:05:57.402670 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:05:57.517402 kubelet[3167]: I0913 00:05:57.517152 3167 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-100" Sep 13 00:05:57.527042 kubelet[3167]: I0913 00:05:57.527004 3167 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-100" Sep 13 00:05:57.527209 kubelet[3167]: I0913 00:05:57.527098 3167 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-100" Sep 13 00:05:57.601030 kubelet[3167]: I0913 00:05:57.600975 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-ca-certs\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:57.601030 kubelet[3167]: I0913 00:05:57.601019 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:57.601030 kubelet[3167]: I0913 00:05:57.601039 3167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:57.601248 kubelet[3167]: I0913 00:05:57.601056 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:57.601248 kubelet[3167]: I0913 00:05:57.601073 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:57.601248 kubelet[3167]: I0913 00:05:57.601105 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30c7281e0fa5422b9e68c24321ae286c-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-100\" (UID: \"30c7281e0fa5422b9e68c24321ae286c\") " pod="kube-system/kube-scheduler-ip-172-31-17-100" Sep 13 00:05:57.601248 kubelet[3167]: I0913 00:05:57.601120 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:57.601248 kubelet[3167]: I0913 00:05:57.601135 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/710b7a4b1ac7689e0063faf08b978236-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-100\" (UID: \"710b7a4b1ac7689e0063faf08b978236\") " pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:57.601384 kubelet[3167]: I0913 00:05:57.601149 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25dfa2ac38962f69bb14ddc7d7a9dc39-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-100\" (UID: \"25dfa2ac38962f69bb14ddc7d7a9dc39\") " pod="kube-system/kube-controller-manager-ip-172-31-17-100" Sep 13 00:05:58.283004 kubelet[3167]: I0913 00:05:58.282767 3167 apiserver.go:52] "Watching apiserver" Sep 13 00:05:58.300115 kubelet[3167]: I0913 00:05:58.300062 3167 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:05:58.395284 kubelet[3167]: E0913 00:05:58.395078 3167 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-100\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-100" Sep 13 00:05:58.438330 kubelet[3167]: I0913 00:05:58.438259 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-100" podStartSLOduration=1.438238951 
podStartE2EDuration="1.438238951s" podCreationTimestamp="2025-09-13 00:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:58.419939105 +0000 UTC m=+1.258909147" watchObservedRunningTime="2025-09-13 00:05:58.438238951 +0000 UTC m=+1.277208992" Sep 13 00:05:58.452126 kubelet[3167]: I0913 00:05:58.452051 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-100" podStartSLOduration=1.452034564 podStartE2EDuration="1.452034564s" podCreationTimestamp="2025-09-13 00:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:58.439332194 +0000 UTC m=+1.278302237" watchObservedRunningTime="2025-09-13 00:05:58.452034564 +0000 UTC m=+1.291004605" Sep 13 00:05:58.464541 kubelet[3167]: I0913 00:05:58.464484 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-100" podStartSLOduration=1.464464986 podStartE2EDuration="1.464464986s" podCreationTimestamp="2025-09-13 00:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:05:58.452523993 +0000 UTC m=+1.291494035" watchObservedRunningTime="2025-09-13 00:05:58.464464986 +0000 UTC m=+1.303435095" Sep 13 00:06:02.937953 kubelet[3167]: I0913 00:06:02.937687 3167 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:06:02.938963 containerd[1977]: time="2025-09-13T00:06:02.938923014Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:06:02.940857 kubelet[3167]: I0913 00:06:02.939210 3167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:06:03.626950 systemd[1]: Created slice kubepods-besteffort-pod9b188f1b_8bdb_48d1_9701_f358042a9d8f.slice - libcontainer container kubepods-besteffort-pod9b188f1b_8bdb_48d1_9701_f358042a9d8f.slice. 
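[note] The kuberuntime_manager.go:1635 and kubelet_network.go:61 entries at 00:06:02 above record the kubelet handing the node's pod CIDR, 192.168.0.0/24, down to the runtime over CRI; containerd replies that no CNI config template is set and that it will wait for another component (here, Calico) to drop one. Below is a minimal sketch of that CRI call, assuming containerd's default socket path and the published k8s.io/cri-api client rather than the kubelet's internal wiring; it is an illustration, not the kubelet's actual code path.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path is containerd's usual default; an assumption, not read from this log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The CRI equivalent of "Updating runtime config through cri with podcidr".
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed to the runtime")
}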
Sep 13 00:06:03.786213 kubelet[3167]: I0913 00:06:03.779433 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b188f1b-8bdb-48d1-9701-f358042a9d8f-kube-proxy\") pod \"kube-proxy-7sc2g\" (UID: \"9b188f1b-8bdb-48d1-9701-f358042a9d8f\") " pod="kube-system/kube-proxy-7sc2g" Sep 13 00:06:03.786213 kubelet[3167]: I0913 00:06:03.779489 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b188f1b-8bdb-48d1-9701-f358042a9d8f-lib-modules\") pod \"kube-proxy-7sc2g\" (UID: \"9b188f1b-8bdb-48d1-9701-f358042a9d8f\") " pod="kube-system/kube-proxy-7sc2g" Sep 13 00:06:03.786213 kubelet[3167]: I0913 00:06:03.779515 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5tw\" (UniqueName: \"kubernetes.io/projected/9b188f1b-8bdb-48d1-9701-f358042a9d8f-kube-api-access-lz5tw\") pod \"kube-proxy-7sc2g\" (UID: \"9b188f1b-8bdb-48d1-9701-f358042a9d8f\") " pod="kube-system/kube-proxy-7sc2g" Sep 13 00:06:03.786213 kubelet[3167]: I0913 00:06:03.779545 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b188f1b-8bdb-48d1-9701-f358042a9d8f-xtables-lock\") pod \"kube-proxy-7sc2g\" (UID: \"9b188f1b-8bdb-48d1-9701-f358042a9d8f\") " pod="kube-system/kube-proxy-7sc2g" Sep 13 00:06:03.962536 containerd[1977]: time="2025-09-13T00:06:03.956806691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sc2g,Uid:9b188f1b-8bdb-48d1-9701-f358042a9d8f,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:03.964456 systemd[1]: Created slice kubepods-besteffort-pod12063c01_def6_410d_b8db_72b900710b9b.slice - libcontainer container kubepods-besteffort-pod12063c01_def6_410d_b8db_72b900710b9b.slice. Sep 13 00:06:04.054089 containerd[1977]: time="2025-09-13T00:06:04.053630693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:04.056360 containerd[1977]: time="2025-09-13T00:06:04.054201317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:04.060217 containerd[1977]: time="2025-09-13T00:06:04.060106643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.060894 containerd[1977]: time="2025-09-13T00:06:04.060800870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.084317 kubelet[3167]: I0913 00:06:04.084164 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvq5\" (UniqueName: \"kubernetes.io/projected/12063c01-def6-410d-b8db-72b900710b9b-kube-api-access-4lvq5\") pod \"tigera-operator-58fc44c59b-tphlt\" (UID: \"12063c01-def6-410d-b8db-72b900710b9b\") " pod="tigera-operator/tigera-operator-58fc44c59b-tphlt" Sep 13 00:06:04.084317 kubelet[3167]: I0913 00:06:04.084226 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12063c01-def6-410d-b8db-72b900710b9b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-tphlt\" (UID: \"12063c01-def6-410d-b8db-72b900710b9b\") " pod="tigera-operator/tigera-operator-58fc44c59b-tphlt" Sep 13 00:06:04.148742 systemd[1]: Started cri-containerd-e656a8139b0854140f5009da84c9198553a3c409e110d2719706cdff88fb3cf9.scope - libcontainer container e656a8139b0854140f5009da84c9198553a3c409e110d2719706cdff88fb3cf9. Sep 13 00:06:04.243079 containerd[1977]: time="2025-09-13T00:06:04.242107635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sc2g,Uid:9b188f1b-8bdb-48d1-9701-f358042a9d8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e656a8139b0854140f5009da84c9198553a3c409e110d2719706cdff88fb3cf9\"" Sep 13 00:06:04.247795 containerd[1977]: time="2025-09-13T00:06:04.247603950Z" level=info msg="CreateContainer within sandbox \"e656a8139b0854140f5009da84c9198553a3c409e110d2719706cdff88fb3cf9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:06:04.277344 containerd[1977]: time="2025-09-13T00:06:04.277299534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tphlt,Uid:12063c01-def6-410d-b8db-72b900710b9b,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:06:04.307606 containerd[1977]: time="2025-09-13T00:06:04.307548563Z" level=info msg="CreateContainer within sandbox \"e656a8139b0854140f5009da84c9198553a3c409e110d2719706cdff88fb3cf9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9b0f0b8bcbd83c8a0ae4ad1bce08cc5ab6484bb4074d55dd738ff137c6af22a\"" Sep 13 00:06:04.312498 containerd[1977]: time="2025-09-13T00:06:04.310580095Z" level=info msg="StartContainer for \"e9b0f0b8bcbd83c8a0ae4ad1bce08cc5ab6484bb4074d55dd738ff137c6af22a\"" Sep 13 00:06:04.435936 containerd[1977]: time="2025-09-13T00:06:04.435157319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:04.435936 containerd[1977]: time="2025-09-13T00:06:04.435312107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:04.435936 containerd[1977]: time="2025-09-13T00:06:04.435448418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.435936 containerd[1977]: time="2025-09-13T00:06:04.435866708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:04.453206 systemd[1]: Started cri-containerd-e9b0f0b8bcbd83c8a0ae4ad1bce08cc5ab6484bb4074d55dd738ff137c6af22a.scope - libcontainer container e9b0f0b8bcbd83c8a0ae4ad1bce08cc5ab6484bb4074d55dd738ff137c6af22a. 
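[note] The kube-proxy startup above follows the standard CRI lifecycle: RunPodSandbox (requested 00:06:03.956, sandbox id returned 00:06:04.242), CreateContainer within that sandbox (00:06:04.247), then StartContainer (00:06:04.310, confirmed successful at 00:06:04.521 just below). A hedged Go sketch of the same three calls follows; the configs are pared-down illustrations, the image tag is invented for the example, and on a real runtime the image would have to be present (pulled via the CRI ImageService) before CreateContainer succeeds.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — mirrors "RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sc2g,...}".
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-7sc2g",
			Uid:       "9b188f1b-8bdb-48d1-9701-f358042a9d8f",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside the sandbox — mirrors "CreateContainer within sandbox ...".
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"}, // illustrative tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — mirrors "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}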
Sep 13 00:06:04.481080 systemd[1]: Started cri-containerd-bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a.scope - libcontainer container bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a. Sep 13 00:06:04.521320 containerd[1977]: time="2025-09-13T00:06:04.521046728Z" level=info msg="StartContainer for \"e9b0f0b8bcbd83c8a0ae4ad1bce08cc5ab6484bb4074d55dd738ff137c6af22a\" returns successfully" Sep 13 00:06:04.568394 containerd[1977]: time="2025-09-13T00:06:04.567947948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tphlt,Uid:12063c01-def6-410d-b8db-72b900710b9b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a\"" Sep 13 00:06:04.580065 containerd[1977]: time="2025-09-13T00:06:04.579784881Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:06:06.134512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131930348.mount: Deactivated successfully. Sep 13 00:06:07.334091 containerd[1977]: time="2025-09-13T00:06:07.334037225Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:07.335227 containerd[1977]: time="2025-09-13T00:06:07.335055795Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:06:07.337310 containerd[1977]: time="2025-09-13T00:06:07.336044412Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:07.360866 containerd[1977]: time="2025-09-13T00:06:07.360781834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:07.361699 containerd[1977]: time="2025-09-13T00:06:07.361489450Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.781639757s" Sep 13 00:06:07.361699 containerd[1977]: time="2025-09-13T00:06:07.361524822Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:06:07.393808 containerd[1977]: time="2025-09-13T00:06:07.393766200Z" level=info msg="CreateContainer within sandbox \"bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:06:07.408326 containerd[1977]: time="2025-09-13T00:06:07.407316735Z" level=info msg="CreateContainer within sandbox \"bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3\"" Sep 13 00:06:07.408326 containerd[1977]: time="2025-09-13T00:06:07.407868239Z" level=info msg="StartContainer for \"d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3\"" Sep 13 00:06:07.444799 systemd[1]: Started cri-containerd-d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3.scope 
- libcontainer container d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3. Sep 13 00:06:07.490048 containerd[1977]: time="2025-09-13T00:06:07.490009425Z" level=info msg="StartContainer for \"d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3\" returns successfully" Sep 13 00:06:08.513567 kubelet[3167]: I0913 00:06:08.513332 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sc2g" podStartSLOduration=5.513314252 podStartE2EDuration="5.513314252s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:05.486177303 +0000 UTC m=+8.325147346" watchObservedRunningTime="2025-09-13 00:06:08.513314252 +0000 UTC m=+11.352284294" Sep 13 00:06:08.513567 kubelet[3167]: I0913 00:06:08.513474 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-tphlt" podStartSLOduration=2.715283891 podStartE2EDuration="5.513467968s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="2025-09-13 00:06:04.575668805 +0000 UTC m=+7.414638827" lastFinishedPulling="2025-09-13 00:06:07.373852871 +0000 UTC m=+10.212822904" observedRunningTime="2025-09-13 00:06:08.510907606 +0000 UTC m=+11.349877647" watchObservedRunningTime="2025-09-13 00:06:08.513467968 +0000 UTC m=+11.352438010" Sep 13 00:06:10.411344 update_engine[1967]: I20250913 00:06:10.410470 1967 update_attempter.cc:509] Updating boot flags... Sep 13 00:06:10.588455 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3524) Sep 13 00:06:10.970442 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3525) Sep 13 00:06:11.340437 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3525) Sep 13 00:06:15.023872 sudo[2302]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:15.051559 sshd[2299]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:15.061200 systemd[1]: sshd@6-172.31.17.100:22-139.178.89.65:48024.service: Deactivated successfully. Sep 13 00:06:15.069939 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:15.071684 systemd[1]: session-7.scope: Consumed 4.959s CPU time, 142.7M memory peak, 0B memory swap peak. Sep 13 00:06:15.074285 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:15.076380 systemd-logind[1964]: Removed session 7. Sep 13 00:06:19.599276 systemd[1]: Created slice kubepods-besteffort-pod719ee819_3bf6_485c_9ac6_cd6c7f870f67.slice - libcontainer container kubepods-besteffort-pod719ee819_3bf6_485c_9ac6_cd6c7f870f67.slice. 
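[note] The pod_startup_latency_tracker entries at 00:06:08.513 above are internally consistent if podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp and podStartSLOduration = E2E duration minus the image-pull window (that relationship is an assumption, but the numbers bear it out to within clock rounding). For tigera-operator-58fc44c59b-tphlt:

  E2E  = 00:06:08.513467968 - 00:06:03 = 5.513467968s
  pull = 00:06:07.373852871 - 00:06:04.575668805 = 2.798184066s
  SLO  = 5.513467968 - 2.798184066 = 2.715283902s   (logged: 2.715283891s)

For kube-proxy-7sc2g no pull was observed (firstStartedPulling is the zero time), so SLO equals E2E: 00:06:08.513314252 - 00:06:03 = 5.513314252s, exactly as logged.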
Sep 13 00:06:19.635193 kubelet[3167]: I0913 00:06:19.635072 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/719ee819-3bf6-485c-9ac6-cd6c7f870f67-tigera-ca-bundle\") pod \"calico-typha-5465d5c94d-pf9lm\" (UID: \"719ee819-3bf6-485c-9ac6-cd6c7f870f67\") " pod="calico-system/calico-typha-5465d5c94d-pf9lm" Sep 13 00:06:19.635193 kubelet[3167]: I0913 00:06:19.635122 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/719ee819-3bf6-485c-9ac6-cd6c7f870f67-typha-certs\") pod \"calico-typha-5465d5c94d-pf9lm\" (UID: \"719ee819-3bf6-485c-9ac6-cd6c7f870f67\") " pod="calico-system/calico-typha-5465d5c94d-pf9lm" Sep 13 00:06:19.635193 kubelet[3167]: I0913 00:06:19.635142 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrjdt\" (UniqueName: \"kubernetes.io/projected/719ee819-3bf6-485c-9ac6-cd6c7f870f67-kube-api-access-qrjdt\") pod \"calico-typha-5465d5c94d-pf9lm\" (UID: \"719ee819-3bf6-485c-9ac6-cd6c7f870f67\") " pod="calico-system/calico-typha-5465d5c94d-pf9lm" Sep 13 00:06:19.820916 systemd[1]: Created slice kubepods-besteffort-pod7278499b_fe4a_4b7d_9826_5d927d46f420.slice - libcontainer container kubepods-besteffort-pod7278499b_fe4a_4b7d_9826_5d927d46f420.slice. Sep 13 00:06:19.837060 kubelet[3167]: I0913 00:06:19.836665 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-cni-net-dir\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837060 kubelet[3167]: I0913 00:06:19.836708 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-var-lib-calico\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837060 kubelet[3167]: I0913 00:06:19.836728 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xhh\" (UniqueName: \"kubernetes.io/projected/7278499b-fe4a-4b7d-9826-5d927d46f420-kube-api-access-l9xhh\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837060 kubelet[3167]: I0913 00:06:19.836751 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7278499b-fe4a-4b7d-9826-5d927d46f420-node-certs\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837060 kubelet[3167]: I0913 00:06:19.836809 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7278499b-fe4a-4b7d-9826-5d927d46f420-tigera-ca-bundle\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837323 kubelet[3167]: I0913 00:06:19.836853 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-cni-bin-dir\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837323 kubelet[3167]: I0913 00:06:19.836868 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-xtables-lock\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837323 kubelet[3167]: I0913 00:06:19.836885 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-cni-log-dir\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837323 kubelet[3167]: I0913 00:06:19.836901 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-flexvol-driver-host\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837323 kubelet[3167]: I0913 00:06:19.836925 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-var-run-calico\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837734 kubelet[3167]: I0913 00:06:19.836941 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-policysync\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.837734 kubelet[3167]: I0913 00:06:19.836959 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7278499b-fe4a-4b7d-9826-5d927d46f420-lib-modules\") pod \"calico-node-mktlg\" (UID: \"7278499b-fe4a-4b7d-9826-5d927d46f420\") " pod="calico-system/calico-node-mktlg" Sep 13 00:06:19.909757 containerd[1977]: time="2025-09-13T00:06:19.909643541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5465d5c94d-pf9lm,Uid:719ee819-3bf6-485c-9ac6-cd6c7f870f67,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:19.950433 kubelet[3167]: E0913 00:06:19.950318 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:19.950433 kubelet[3167]: W0913 00:06:19.950358 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:19.957577 kubelet[3167]: E0913 00:06:19.954956 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:19.963614 kubelet[3167]: E0913 00:06:19.962506 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:19.963614 kubelet[3167]: W0913 00:06:19.963530 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:19.963614 kubelet[3167]: E0913 00:06:19.963565 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:19.974511 kubelet[3167]: E0913 00:06:19.969394 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:19.974511 kubelet[3167]: W0913 00:06:19.969454 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:19.974511 kubelet[3167]: E0913 00:06:19.969482 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:19.980460 containerd[1977]: time="2025-09-13T00:06:19.980026770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:19.980460 containerd[1977]: time="2025-09-13T00:06:19.980137674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:19.980460 containerd[1977]: time="2025-09-13T00:06:19.980175488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:19.980460 containerd[1977]: time="2025-09-13T00:06:19.980306179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:20.038653 systemd[1]: Started cri-containerd-0e86bdc01f0e855287a040378934e3da09373e269213fc9f316c225354ba9afc.scope - libcontainer container 0e86bdc01f0e855287a040378934e3da09373e269213fc9f316c225354ba9afc. Sep 13 00:06:20.128236 kubelet[3167]: E0913 00:06:20.128087 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:20.134168 kubelet[3167]: E0913 00:06:20.133336 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.134168 kubelet[3167]: W0913 00:06:20.133364 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.134168 kubelet[3167]: E0913 00:06:20.133390 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.135246 kubelet[3167]: E0913 00:06:20.135132 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.135246 kubelet[3167]: W0913 00:06:20.135162 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.135246 kubelet[3167]: E0913 00:06:20.135187 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.136588 kubelet[3167]: E0913 00:06:20.136288 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.136588 kubelet[3167]: W0913 00:06:20.136306 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.136588 kubelet[3167]: E0913 00:06:20.136537 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.138644 kubelet[3167]: E0913 00:06:20.137611 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.138644 kubelet[3167]: W0913 00:06:20.137844 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.138644 kubelet[3167]: E0913 00:06:20.137866 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.139615 kubelet[3167]: E0913 00:06:20.139509 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.139615 kubelet[3167]: W0913 00:06:20.139557 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.139615 kubelet[3167]: E0913 00:06:20.139578 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.140084 kubelet[3167]: E0913 00:06:20.139899 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.140084 kubelet[3167]: W0913 00:06:20.139922 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.140084 kubelet[3167]: E0913 00:06:20.139938 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.140634 kubelet[3167]: E0913 00:06:20.140615 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.140634 kubelet[3167]: W0913 00:06:20.140635 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.140755 kubelet[3167]: E0913 00:06:20.140650 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.141603 kubelet[3167]: E0913 00:06:20.141585 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.141603 kubelet[3167]: W0913 00:06:20.141604 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.141731 kubelet[3167]: E0913 00:06:20.141619 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.142657 kubelet[3167]: E0913 00:06:20.142403 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.142657 kubelet[3167]: W0913 00:06:20.142528 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.142657 kubelet[3167]: E0913 00:06:20.142545 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.142657 kubelet[3167]: I0913 00:06:20.142572 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85496fec-fe0d-4f47-8c0f-4732dc0de194-registration-dir\") pod \"csi-node-driver-hn4jz\" (UID: \"85496fec-fe0d-4f47-8c0f-4732dc0de194\") " pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:20.143425 kubelet[3167]: E0913 00:06:20.142979 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.143425 kubelet[3167]: W0913 00:06:20.142995 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.145249 kubelet[3167]: E0913 00:06:20.145157 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.145249 kubelet[3167]: I0913 00:06:20.145199 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85496fec-fe0d-4f47-8c0f-4732dc0de194-kubelet-dir\") pod \"csi-node-driver-hn4jz\" (UID: \"85496fec-fe0d-4f47-8c0f-4732dc0de194\") " pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:20.145632 kubelet[3167]: E0913 00:06:20.145499 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.145632 kubelet[3167]: W0913 00:06:20.145514 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.145632 kubelet[3167]: E0913 00:06:20.145535 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.146055 kubelet[3167]: E0913 00:06:20.145946 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.146055 kubelet[3167]: W0913 00:06:20.145961 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.146055 kubelet[3167]: E0913 00:06:20.145986 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.146518 kubelet[3167]: E0913 00:06:20.146333 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.146518 kubelet[3167]: W0913 00:06:20.146346 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.146645 kubelet[3167]: E0913 00:06:20.146519 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.147066 kubelet[3167]: E0913 00:06:20.146833 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.147066 kubelet[3167]: W0913 00:06:20.146847 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.147066 kubelet[3167]: E0913 00:06:20.146931 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.147522 containerd[1977]: time="2025-09-13T00:06:20.147364706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5465d5c94d-pf9lm,Uid:719ee819-3bf6-485c-9ac6-cd6c7f870f67,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e86bdc01f0e855287a040378934e3da09373e269213fc9f316c225354ba9afc\"" Sep 13 00:06:20.147606 kubelet[3167]: E0913 00:06:20.147427 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.147606 kubelet[3167]: W0913 00:06:20.147438 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.147606 kubelet[3167]: E0913 00:06:20.147472 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.148134 kubelet[3167]: E0913 00:06:20.147957 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.148134 kubelet[3167]: W0913 00:06:20.147972 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.148134 kubelet[3167]: E0913 00:06:20.148054 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.148284 containerd[1977]: time="2025-09-13T00:06:20.148050917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mktlg,Uid:7278499b-fe4a-4b7d-9826-5d927d46f420,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:20.148756 kubelet[3167]: E0913 00:06:20.148528 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.148756 kubelet[3167]: W0913 00:06:20.148541 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.148756 kubelet[3167]: E0913 00:06:20.148560 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.149098 kubelet[3167]: E0913 00:06:20.148950 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.149098 kubelet[3167]: W0913 00:06:20.148962 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.149098 kubelet[3167]: E0913 00:06:20.148990 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.149535 kubelet[3167]: E0913 00:06:20.149389 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.149535 kubelet[3167]: W0913 00:06:20.149402 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.149535 kubelet[3167]: E0913 00:06:20.149440 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.149998 kubelet[3167]: E0913 00:06:20.149809 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.149998 kubelet[3167]: W0913 00:06:20.149822 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.149998 kubelet[3167]: E0913 00:06:20.149834 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.150302 kubelet[3167]: E0913 00:06:20.150185 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.150302 kubelet[3167]: W0913 00:06:20.150199 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.150302 kubelet[3167]: E0913 00:06:20.150213 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.150730 kubelet[3167]: E0913 00:06:20.150604 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.150730 kubelet[3167]: W0913 00:06:20.150617 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.150730 kubelet[3167]: E0913 00:06:20.150631 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.151201 kubelet[3167]: E0913 00:06:20.151003 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.151201 kubelet[3167]: W0913 00:06:20.151015 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.151201 kubelet[3167]: E0913 00:06:20.151029 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.151516 kubelet[3167]: E0913 00:06:20.151385 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.151516 kubelet[3167]: W0913 00:06:20.151398 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.151516 kubelet[3167]: E0913 00:06:20.151432 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.151910 kubelet[3167]: E0913 00:06:20.151795 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.151910 kubelet[3167]: W0913 00:06:20.151810 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.151910 kubelet[3167]: E0913 00:06:20.151823 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.152383 kubelet[3167]: E0913 00:06:20.152175 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.152383 kubelet[3167]: W0913 00:06:20.152188 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.152383 kubelet[3167]: E0913 00:06:20.152203 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.153948 containerd[1977]: time="2025-09-13T00:06:20.153805883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:06:20.225647 containerd[1977]: time="2025-09-13T00:06:20.222228659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:20.225647 containerd[1977]: time="2025-09-13T00:06:20.222338287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:20.225647 containerd[1977]: time="2025-09-13T00:06:20.222366542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:20.225647 containerd[1977]: time="2025-09-13T00:06:20.223503673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:20.247689 kubelet[3167]: E0913 00:06:20.247495 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.247689 kubelet[3167]: W0913 00:06:20.247530 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.247689 kubelet[3167]: E0913 00:06:20.247562 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.247689 kubelet[3167]: I0913 00:06:20.247599 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rn5\" (UniqueName: \"kubernetes.io/projected/85496fec-fe0d-4f47-8c0f-4732dc0de194-kube-api-access-v4rn5\") pod \"csi-node-driver-hn4jz\" (UID: \"85496fec-fe0d-4f47-8c0f-4732dc0de194\") " pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:20.249875 kubelet[3167]: E0913 00:06:20.249516 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.249875 kubelet[3167]: W0913 00:06:20.249593 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.249875 kubelet[3167]: E0913 00:06:20.249639 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.250320 kubelet[3167]: E0913 00:06:20.250155 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.250320 kubelet[3167]: W0913 00:06:20.250171 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.250320 kubelet[3167]: E0913 00:06:20.250190 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.252157 kubelet[3167]: E0913 00:06:20.251302 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.252157 kubelet[3167]: W0913 00:06:20.251318 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.252157 kubelet[3167]: E0913 00:06:20.251349 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.254114 kubelet[3167]: E0913 00:06:20.253979 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.254114 kubelet[3167]: W0913 00:06:20.253994 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.254114 kubelet[3167]: E0913 00:06:20.254015 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.255993 kubelet[3167]: E0913 00:06:20.254888 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.255993 kubelet[3167]: W0913 00:06:20.254904 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.255993 kubelet[3167]: E0913 00:06:20.254936 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.257516 kubelet[3167]: E0913 00:06:20.257498 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.257784 kubelet[3167]: W0913 00:06:20.257604 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.257947 kubelet[3167]: E0913 00:06:20.257888 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.259430 kubelet[3167]: E0913 00:06:20.258129 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.259430 kubelet[3167]: W0913 00:06:20.258143 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.259430 kubelet[3167]: E0913 00:06:20.258164 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.259430 kubelet[3167]: I0913 00:06:20.258196 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85496fec-fe0d-4f47-8c0f-4732dc0de194-varrun\") pod \"csi-node-driver-hn4jz\" (UID: \"85496fec-fe0d-4f47-8c0f-4732dc0de194\") " pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:20.260706 kubelet[3167]: E0913 00:06:20.260102 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.260706 kubelet[3167]: W0913 00:06:20.260119 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.260706 kubelet[3167]: E0913 00:06:20.260332 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.264053 kubelet[3167]: E0913 00:06:20.263821 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.264053 kubelet[3167]: W0913 00:06:20.263838 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.264053 kubelet[3167]: E0913 00:06:20.263898 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.264214 kubelet[3167]: E0913 00:06:20.264155 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.264214 kubelet[3167]: W0913 00:06:20.264167 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.264597 kubelet[3167]: E0913 00:06:20.264573 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.265123 kubelet[3167]: E0913 00:06:20.264991 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.265123 kubelet[3167]: W0913 00:06:20.265005 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.265123 kubelet[3167]: E0913 00:06:20.265088 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.265123 kubelet[3167]: I0913 00:06:20.265119 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85496fec-fe0d-4f47-8c0f-4732dc0de194-socket-dir\") pod \"csi-node-driver-hn4jz\" (UID: \"85496fec-fe0d-4f47-8c0f-4732dc0de194\") " pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:20.265327 kubelet[3167]: E0913 00:06:20.265321 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.265377 kubelet[3167]: W0913 00:06:20.265332 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.265446 kubelet[3167]: E0913 00:06:20.265391 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.265594 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.267231 kubelet[3167]: W0913 00:06:20.265631 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.265645 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.265958 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.267231 kubelet[3167]: W0913 00:06:20.265968 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.265984 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.266210 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.267231 kubelet[3167]: W0913 00:06:20.266219 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.266241 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.267231 kubelet[3167]: E0913 00:06:20.266559 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.267568 kubelet[3167]: W0913 00:06:20.266571 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.267568 kubelet[3167]: E0913 00:06:20.266600 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.267568 kubelet[3167]: E0913 00:06:20.267041 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.267568 kubelet[3167]: W0913 00:06:20.267053 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.267568 kubelet[3167]: E0913 00:06:20.267131 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.272805 kubelet[3167]: E0913 00:06:20.271498 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.272805 kubelet[3167]: W0913 00:06:20.271512 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.272805 kubelet[3167]: E0913 00:06:20.271524 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.274641 systemd[1]: Started cri-containerd-241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9.scope - libcontainer container 241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9. Sep 13 00:06:20.325293 containerd[1977]: time="2025-09-13T00:06:20.324903518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mktlg,Uid:7278499b-fe4a-4b7d-9826-5d927d46f420,Namespace:calico-system,Attempt:0,} returns sandbox id \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\"" Sep 13 00:06:20.367391 kubelet[3167]: E0913 00:06:20.367355 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.367391 kubelet[3167]: W0913 00:06:20.367380 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.367606 kubelet[3167]: E0913 00:06:20.367416 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.367874 kubelet[3167]: E0913 00:06:20.367847 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.367874 kubelet[3167]: W0913 00:06:20.367869 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.368004 kubelet[3167]: E0913 00:06:20.367918 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.368665 kubelet[3167]: E0913 00:06:20.368638 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.368665 kubelet[3167]: W0913 00:06:20.368656 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.368796 kubelet[3167]: E0913 00:06:20.368694 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.370750 kubelet[3167]: E0913 00:06:20.370729 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.370750 kubelet[3167]: W0913 00:06:20.370747 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.370894 kubelet[3167]: E0913 00:06:20.370778 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.371129 kubelet[3167]: E0913 00:06:20.371085 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.371129 kubelet[3167]: W0913 00:06:20.371097 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.371339 kubelet[3167]: E0913 00:06:20.371181 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.371506 kubelet[3167]: E0913 00:06:20.371490 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.371587 kubelet[3167]: W0913 00:06:20.371508 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.371741 kubelet[3167]: E0913 00:06:20.371615 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.371908 kubelet[3167]: E0913 00:06:20.371894 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.371986 kubelet[3167]: W0913 00:06:20.371910 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.372172 kubelet[3167]: E0913 00:06:20.372059 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.372247 kubelet[3167]: E0913 00:06:20.372177 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.372247 kubelet[3167]: W0913 00:06:20.372187 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.372247 kubelet[3167]: E0913 00:06:20.372206 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.372612 kubelet[3167]: E0913 00:06:20.372508 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.372612 kubelet[3167]: W0913 00:06:20.372519 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.372612 kubelet[3167]: E0913 00:06:20.372544 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.372979 kubelet[3167]: E0913 00:06:20.372910 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.372979 kubelet[3167]: W0913 00:06:20.372922 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.372979 kubelet[3167]: E0913 00:06:20.372954 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.373250 kubelet[3167]: E0913 00:06:20.373227 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.373456 kubelet[3167]: W0913 00:06:20.373345 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.373456 kubelet[3167]: E0913 00:06:20.373373 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:20.373992 kubelet[3167]: E0913 00:06:20.373906 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.373992 kubelet[3167]: W0913 00:06:20.373939 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.374219 kubelet[3167]: E0913 00:06:20.374108 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.375353 kubelet[3167]: E0913 00:06:20.375327 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.375353 kubelet[3167]: W0913 00:06:20.375345 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.375529 kubelet[3167]: E0913 00:06:20.375361 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.375779 kubelet[3167]: E0913 00:06:20.375761 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.375779 kubelet[3167]: W0913 00:06:20.375775 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.375892 kubelet[3167]: E0913 00:06:20.375790 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.378747 kubelet[3167]: E0913 00:06:20.378724 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.378747 kubelet[3167]: W0913 00:06:20.378746 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.378862 kubelet[3167]: E0913 00:06:20.378764 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:20.386983 kubelet[3167]: E0913 00:06:20.386946 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:20.386983 kubelet[3167]: W0913 00:06:20.386975 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:20.387150 kubelet[3167]: E0913 00:06:20.386999 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:21.622592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439527002.mount: Deactivated successfully. Sep 13 00:06:22.342665 kubelet[3167]: E0913 00:06:22.342620 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:23.493236 containerd[1977]: time="2025-09-13T00:06:23.493182479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.495030 containerd[1977]: time="2025-09-13T00:06:23.494539984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:06:23.497755 containerd[1977]: time="2025-09-13T00:06:23.495962400Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.498741 containerd[1977]: time="2025-09-13T00:06:23.498701054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.499608 containerd[1977]: time="2025-09-13T00:06:23.499574304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.345724254s" Sep 13 00:06:23.499749 containerd[1977]: time="2025-09-13T00:06:23.499726568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:06:23.501073 containerd[1977]: time="2025-09-13T00:06:23.501040964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:06:23.522886 containerd[1977]: time="2025-09-13T00:06:23.522845786Z" level=info msg="CreateContainer within sandbox \"0e86bdc01f0e855287a040378934e3da09373e269213fc9f316c225354ba9afc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:06:23.555978 containerd[1977]: time="2025-09-13T00:06:23.555335263Z" level=info msg="CreateContainer within sandbox \"0e86bdc01f0e855287a040378934e3da09373e269213fc9f316c225354ba9afc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"005ede906e7f899428c619040495bf6353ed0b2e54b7ac8ce4857d36bf27b30e\"" Sep 13 00:06:23.559351 containerd[1977]: time="2025-09-13T00:06:23.558287814Z" level=info msg="StartContainer for \"005ede906e7f899428c619040495bf6353ed0b2e54b7ac8ce4857d36bf27b30e\"" Sep 13 00:06:23.608739 systemd[1]: Started cri-containerd-005ede906e7f899428c619040495bf6353ed0b2e54b7ac8ce4857d36bf27b30e.scope - libcontainer container 005ede906e7f899428c619040495bf6353ed0b2e54b7ac8ce4857d36bf27b30e. 
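
The repeated `driver-call.go` failures above all share one root cause: the FlexVolume `init` call expects the driver binary to print a JSON status object on stdout, but the `nodeagent~uds/uds` executable is missing, so the kubelet ends up unmarshalling an empty string, which is exactly what produces "unexpected end of JSON input". A minimal sketch of that error path, assuming an illustrative `driverStatus` shape rather than the kubelet's real type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the kind of JSON a FlexVolume driver is expected
// to print for "init"; the exact field set here is illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	// If the binary does not exist, CombinedOutput returns an exec error
	// and the output stays empty -- the situation in the log above.
	out, execErr := exec.Command(path, args...).CombinedOutput()

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal of "" fails with "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)",
			out, err, execErr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```
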
Sep 13 00:06:23.661305 containerd[1977]: time="2025-09-13T00:06:23.661270667Z" level=info msg="StartContainer for \"005ede906e7f899428c619040495bf6353ed0b2e54b7ac8ce4857d36bf27b30e\" returns successfully" Sep 13 00:06:24.343573 kubelet[3167]: E0913 00:06:24.342695 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:24.604143 kubelet[3167]: I0913 00:06:24.603363 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5465d5c94d-pf9lm" podStartSLOduration=2.255953507 podStartE2EDuration="5.603349439s" podCreationTimestamp="2025-09-13 00:06:19 +0000 UTC" firstStartedPulling="2025-09-13 00:06:20.153497564 +0000 UTC m=+22.992467599" lastFinishedPulling="2025-09-13 00:06:23.500893508 +0000 UTC m=+26.339863531" observedRunningTime="2025-09-13 00:06:24.603078221 +0000 UTC m=+27.442048260" watchObservedRunningTime="2025-09-13 00:06:24.603349439 +0000 UTC m=+27.442319480" Sep 13 00:06:24.686511 kubelet[3167]: E0913 00:06:24.686468 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.686965 kubelet[3167]: W0913 00:06:24.686670 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.686965 kubelet[3167]: E0913 00:06:24.686704 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.689881 kubelet[3167]: E0913 00:06:24.689601 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.690204 kubelet[3167]: W0913 00:06:24.690025 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.690204 kubelet[3167]: E0913 00:06:24.690061 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.690582 kubelet[3167]: E0913 00:06:24.690557 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.690665 kubelet[3167]: W0913 00:06:24.690582 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.690665 kubelet[3167]: E0913 00:06:24.690602 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.691045 kubelet[3167]: E0913 00:06:24.691020 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.691143 kubelet[3167]: W0913 00:06:24.691119 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.691143 kubelet[3167]: E0913 00:06:24.691138 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.691774 kubelet[3167]: E0913 00:06:24.691754 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.691774 kubelet[3167]: W0913 00:06:24.691769 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.692203 kubelet[3167]: E0913 00:06:24.691786 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.692653 kubelet[3167]: E0913 00:06:24.692437 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.692653 kubelet[3167]: W0913 00:06:24.692453 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.692653 kubelet[3167]: E0913 00:06:24.692554 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.693057 kubelet[3167]: E0913 00:06:24.693043 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.693503 kubelet[3167]: W0913 00:06:24.693151 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.693503 kubelet[3167]: E0913 00:06:24.693171 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.693702 kubelet[3167]: E0913 00:06:24.693690 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.693893 kubelet[3167]: W0913 00:06:24.693767 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.693893 kubelet[3167]: E0913 00:06:24.693784 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.694160 kubelet[3167]: E0913 00:06:24.694146 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.694376 kubelet[3167]: W0913 00:06:24.694285 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.694376 kubelet[3167]: E0913 00:06:24.694327 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.694998 kubelet[3167]: E0913 00:06:24.694820 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.694998 kubelet[3167]: W0913 00:06:24.694835 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.694998 kubelet[3167]: E0913 00:06:24.694850 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.695532 kubelet[3167]: E0913 00:06:24.695368 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.695532 kubelet[3167]: W0913 00:06:24.695383 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.695532 kubelet[3167]: E0913 00:06:24.695398 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.695962 kubelet[3167]: E0913 00:06:24.695935 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.695962 kubelet[3167]: W0913 00:06:24.695950 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.696077 kubelet[3167]: E0913 00:06:24.695965 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.696261 kubelet[3167]: E0913 00:06:24.696238 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.696261 kubelet[3167]: W0913 00:06:24.696256 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.696722 kubelet[3167]: E0913 00:06:24.696272 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.697079 kubelet[3167]: E0913 00:06:24.697061 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.697147 kubelet[3167]: W0913 00:06:24.697080 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.697147 kubelet[3167]: E0913 00:06:24.697095 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.697951 kubelet[3167]: E0913 00:06:24.697644 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.697951 kubelet[3167]: W0913 00:06:24.697658 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.697951 kubelet[3167]: E0913 00:06:24.697673 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.700129 kubelet[3167]: E0913 00:06:24.700110 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.700129 kubelet[3167]: W0913 00:06:24.700126 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.700340 kubelet[3167]: E0913 00:06:24.700141 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.700925 kubelet[3167]: E0913 00:06:24.700768 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.700925 kubelet[3167]: W0913 00:06:24.700783 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.701137 kubelet[3167]: E0913 00:06:24.700812 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.701506 kubelet[3167]: E0913 00:06:24.701349 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.701506 kubelet[3167]: W0913 00:06:24.701365 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.701506 kubelet[3167]: E0913 00:06:24.701385 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.702181 kubelet[3167]: E0913 00:06:24.701891 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.702181 kubelet[3167]: W0913 00:06:24.701906 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.702181 kubelet[3167]: E0913 00:06:24.701942 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.702594 kubelet[3167]: E0913 00:06:24.702400 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.702594 kubelet[3167]: W0913 00:06:24.702434 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.702594 kubelet[3167]: E0913 00:06:24.702454 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.703582 kubelet[3167]: E0913 00:06:24.702982 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.703582 kubelet[3167]: W0913 00:06:24.703006 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.703582 kubelet[3167]: E0913 00:06:24.703126 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.704033 kubelet[3167]: E0913 00:06:24.704020 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.704165 kubelet[3167]: W0913 00:06:24.704138 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.704400 kubelet[3167]: E0913 00:06:24.704325 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.704975 kubelet[3167]: E0913 00:06:24.704879 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.704975 kubelet[3167]: W0913 00:06:24.704894 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.705474 kubelet[3167]: E0913 00:06:24.705288 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.705647 kubelet[3167]: E0913 00:06:24.705550 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.705647 kubelet[3167]: W0913 00:06:24.705567 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.705957 kubelet[3167]: E0913 00:06:24.705817 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.706501 kubelet[3167]: E0913 00:06:24.706481 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.706501 kubelet[3167]: W0913 00:06:24.706498 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.706637 kubelet[3167]: E0913 00:06:24.706595 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.706784 kubelet[3167]: E0913 00:06:24.706762 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.706784 kubelet[3167]: W0913 00:06:24.706775 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.706945 kubelet[3167]: E0913 00:06:24.706793 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.707177 kubelet[3167]: E0913 00:06:24.707160 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.707266 kubelet[3167]: W0913 00:06:24.707193 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.707266 kubelet[3167]: E0913 00:06:24.707213 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.708075 kubelet[3167]: E0913 00:06:24.707726 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.708075 kubelet[3167]: W0913 00:06:24.707741 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.708075 kubelet[3167]: E0913 00:06:24.707766 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.708592 kubelet[3167]: E0913 00:06:24.708578 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.708722 kubelet[3167]: W0913 00:06:24.708708 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.709067 kubelet[3167]: E0913 00:06:24.708884 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.709316 kubelet[3167]: E0913 00:06:24.709189 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.709316 kubelet[3167]: W0913 00:06:24.709200 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.709316 kubelet[3167]: E0913 00:06:24.709223 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.709785 kubelet[3167]: E0913 00:06:24.709754 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.710004 kubelet[3167]: W0913 00:06:24.709885 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.710004 kubelet[3167]: E0913 00:06:24.709905 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.710621 kubelet[3167]: E0913 00:06:24.710350 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.710621 kubelet[3167]: W0913 00:06:24.710363 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.710621 kubelet[3167]: E0913 00:06:24.710378 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:06:24.711132 kubelet[3167]: E0913 00:06:24.711052 3167 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:06:24.711132 kubelet[3167]: W0913 00:06:24.711066 3167 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:06:24.711132 kubelet[3167]: E0913 00:06:24.711081 3167 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:06:24.774460 containerd[1977]: time="2025-09-13T00:06:24.774346893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.775726 containerd[1977]: time="2025-09-13T00:06:24.775666695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:06:24.776981 containerd[1977]: time="2025-09-13T00:06:24.776938716Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.782014 containerd[1977]: time="2025-09-13T00:06:24.781133547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.782014 containerd[1977]: time="2025-09-13T00:06:24.781865269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.280780133s" Sep 13 00:06:24.782014 containerd[1977]: time="2025-09-13T00:06:24.781908245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:06:24.785765 containerd[1977]: time="2025-09-13T00:06:24.785726573Z" level=info msg="CreateContainer within sandbox \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:06:24.809506 containerd[1977]: time="2025-09-13T00:06:24.809448862Z" level=info msg="CreateContainer within sandbox \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858\"" Sep 13 00:06:24.811401 containerd[1977]: time="2025-09-13T00:06:24.810059351Z" level=info msg="StartContainer for \"16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858\"" Sep 13 00:06:24.870909 systemd[1]: run-containerd-runc-k8s.io-16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858-runc.1069Q5.mount: Deactivated successfully. Sep 13 00:06:24.886658 systemd[1]: Started cri-containerd-16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858.scope - libcontainer container 16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858. Sep 13 00:06:24.927393 containerd[1977]: time="2025-09-13T00:06:24.927353204Z" level=info msg="StartContainer for \"16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858\" returns successfully" Sep 13 00:06:24.940217 systemd[1]: cri-containerd-16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858.scope: Deactivated successfully. 
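
The `pod_startup_latency_tracker` entry for calico-typha-5465d5c94d-pf9lm a few entries back encodes a simple relationship: `podStartSLOduration` is the end-to-end startup time minus the time spent pulling images, with the pull window measured on the monotonic clock (the `m=+...` suffixes). A quick check of the logged numbers:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created, _ := time.Parse(layout, "2025-09-13 00:06:19 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-13 00:06:24.603349439 +0000 UTC")

	// The image-pull window uses the monotonic offsets (m=+...), which is
	// why it differs from the wall-clock pull timestamps by a few ns.
	pull := time.Duration((26.339863531 - 22.992467599) * float64(time.Second))

	e2e := running.Sub(created)
	fmt.Println(e2e)        // 5.603349439s  (podStartE2EDuration)
	fmt.Println(e2e - pull) // ~2.255953507s (podStartSLOduration)
}
```
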
Sep 13 00:06:25.133316 containerd[1977]: time="2025-09-13T00:06:25.107132600Z" level=info msg="shim disconnected" id=16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858 namespace=k8s.io Sep 13 00:06:25.133316 containerd[1977]: time="2025-09-13T00:06:25.133231147Z" level=warning msg="cleaning up after shim disconnected" id=16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858 namespace=k8s.io Sep 13 00:06:25.133316 containerd[1977]: time="2025-09-13T00:06:25.133250791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:25.592827 kubelet[3167]: I0913 00:06:25.592768 3167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:06:25.594312 containerd[1977]: time="2025-09-13T00:06:25.593953194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:06:25.799503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16f9563c2079e7a532e57be6b1a7f25c3bb6b90bbb1a150a17d0fb8e11dee858-rootfs.mount: Deactivated successfully. Sep 13 00:06:26.343067 kubelet[3167]: E0913 00:06:26.343004 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:28.343866 kubelet[3167]: E0913 00:06:28.343243 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:29.544201 containerd[1977]: time="2025-09-13T00:06:29.544136467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:29.546111 containerd[1977]: time="2025-09-13T00:06:29.545941550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:06:29.549451 containerd[1977]: time="2025-09-13T00:06:29.548152840Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:29.552117 containerd[1977]: time="2025-09-13T00:06:29.551773684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:29.553158 containerd[1977]: time="2025-09-13T00:06:29.553115466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.959034221s" Sep 13 00:06:29.553158 containerd[1977]: time="2025-09-13T00:06:29.553154687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:06:29.555757 containerd[1977]: time="2025-09-13T00:06:29.555720686Z" level=info msg="CreateContainer within sandbox 
\"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:06:29.604892 containerd[1977]: time="2025-09-13T00:06:29.604848364Z" level=info msg="CreateContainer within sandbox \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af\"" Sep 13 00:06:29.606469 containerd[1977]: time="2025-09-13T00:06:29.605376957Z" level=info msg="StartContainer for \"079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af\"" Sep 13 00:06:29.641615 systemd[1]: Started cri-containerd-079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af.scope - libcontainer container 079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af. Sep 13 00:06:29.676757 containerd[1977]: time="2025-09-13T00:06:29.676576560Z" level=info msg="StartContainer for \"079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af\" returns successfully" Sep 13 00:06:30.342894 kubelet[3167]: E0913 00:06:30.342847 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:30.407562 systemd[1]: cri-containerd-079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af.scope: Deactivated successfully. Sep 13 00:06:30.457911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af-rootfs.mount: Deactivated successfully. Sep 13 00:06:30.466765 kubelet[3167]: I0913 00:06:30.464484 3167 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:06:30.551913 systemd[1]: Created slice kubepods-besteffort-pod82335de8_dedd_4596_af21_5f90659e12f3.slice - libcontainer container kubepods-besteffort-pod82335de8_dedd_4596_af21_5f90659e12f3.slice. 
Sep 13 00:06:30.562748 kubelet[3167]: I0913 00:06:30.562713 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82335de8-dedd-4596-af21-5f90659e12f3-whisker-ca-bundle\") pod \"whisker-6f5dc47898-hfqln\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " pod="calico-system/whisker-6f5dc47898-hfqln" Sep 13 00:06:30.563000 kubelet[3167]: I0913 00:06:30.562984 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82335de8-dedd-4596-af21-5f90659e12f3-whisker-backend-key-pair\") pod \"whisker-6f5dc47898-hfqln\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " pod="calico-system/whisker-6f5dc47898-hfqln" Sep 13 00:06:30.563113 kubelet[3167]: I0913 00:06:30.563074 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvj7\" (UniqueName: \"kubernetes.io/projected/82335de8-dedd-4596-af21-5f90659e12f3-kube-api-access-fxvj7\") pod \"whisker-6f5dc47898-hfqln\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " pod="calico-system/whisker-6f5dc47898-hfqln" Sep 13 00:06:30.592690 systemd[1]: Created slice kubepods-besteffort-podc459cf7c_3ec3_4406_97f9_a4a7ecf9e253.slice - libcontainer container kubepods-besteffort-podc459cf7c_3ec3_4406_97f9_a4a7ecf9e253.slice. Sep 13 00:06:30.600303 systemd[1]: Created slice kubepods-besteffort-pod3ecbb697_c2d6_45f9_b896_7067a982618c.slice - libcontainer container kubepods-besteffort-pod3ecbb697_c2d6_45f9_b896_7067a982618c.slice. Sep 13 00:06:30.613210 systemd[1]: Created slice kubepods-besteffort-podcf5b23e2_56dc_49fe_9c10_0589cb6cd48a.slice - libcontainer container kubepods-besteffort-podcf5b23e2_56dc_49fe_9c10_0589cb6cd48a.slice. Sep 13 00:06:30.620923 systemd[1]: Created slice kubepods-burstable-pod41318bb3_412c_4e63_8cef_3f95f4d6ba6e.slice - libcontainer container kubepods-burstable-pod41318bb3_412c_4e63_8cef_3f95f4d6ba6e.slice. Sep 13 00:06:30.631368 systemd[1]: Created slice kubepods-burstable-podac2a539b_0e55_4a98_8350_b301bacb6e1b.slice - libcontainer container kubepods-burstable-podac2a539b_0e55_4a98_8350_b301bacb6e1b.slice. Sep 13 00:06:30.637154 kubelet[3167]: W0913 00:06:30.637103 3167 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-17-100" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-100' and this object Sep 13 00:06:30.640340 systemd[1]: Created slice kubepods-besteffort-pod54ae31d6_2edb_4026_9279_0fba6bb34176.slice - libcontainer container kubepods-besteffort-pod54ae31d6_2edb_4026_9279_0fba6bb34176.slice. 
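
The reflector warning about the goldmane-key-pair secret is not a Calico fault: the kubelet runs under the node authorizer, which only grants a node read access to a secret once a pod that mounts it is bound to that node, so the watch is forbidden until the goldmane pod's binding propagates. A toy model of that rule (the types and flat slice here are illustrative; the real authorizer walks a graph built from API server informers):

```go
package main

import "fmt"

// Illustrative pod record: which node it is bound to and which
// secrets its volumes reference.
type pod struct {
	node    string
	secrets []string
}

// nodeCanReadSecret applies the node-authorizer rule behind the log's
// "no relationship found" error: access flows only through a pod on
// that node that references the secret.
func nodeCanReadSecret(pods []pod, node, secret string) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, s := range p.secrets {
			if s == secret {
				return true
			}
		}
	}
	return false
}

func main() {
	// Before the scheduler's binding is observed, this list is effectively
	// empty for the node and the secret read is forbidden; afterwards the
	// same check succeeds.
	pods := []pod{{node: "ip-172-31-17-100", secrets: []string{"goldmane-key-pair"}}}
	fmt.Println(nodeCanReadSecret(pods, "ip-172-31-17-100", "goldmane-key-pair"))
}
```
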
Sep 13 00:06:30.665892 kubelet[3167]: I0913 00:06:30.665850 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghtq\" (UniqueName: \"kubernetes.io/projected/cf5b23e2-56dc-49fe-9c10-0589cb6cd48a-kube-api-access-hghtq\") pod \"calico-apiserver-56596dc84c-nl9sx\" (UID: \"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a\") " pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" Sep 13 00:06:30.666057 kubelet[3167]: I0913 00:06:30.665928 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41318bb3-412c-4e63-8cef-3f95f4d6ba6e-config-volume\") pod \"coredns-7c65d6cfc9-6hgsg\" (UID: \"41318bb3-412c-4e63-8cef-3f95f4d6ba6e\") " pod="kube-system/coredns-7c65d6cfc9-6hgsg" Sep 13 00:06:30.666057 kubelet[3167]: I0913 00:06:30.665946 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2j2\" (UniqueName: \"kubernetes.io/projected/41318bb3-412c-4e63-8cef-3f95f4d6ba6e-kube-api-access-5l2j2\") pod \"coredns-7c65d6cfc9-6hgsg\" (UID: \"41318bb3-412c-4e63-8cef-3f95f4d6ba6e\") " pod="kube-system/coredns-7c65d6cfc9-6hgsg" Sep 13 00:06:30.666057 kubelet[3167]: I0913 00:06:30.665963 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlxk7\" (UniqueName: \"kubernetes.io/projected/c459cf7c-3ec3-4406-97f9-a4a7ecf9e253-kube-api-access-jlxk7\") pod \"calico-kube-controllers-58df989bd8-4mk2j\" (UID: \"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253\") " pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" Sep 13 00:06:30.666057 kubelet[3167]: I0913 00:06:30.665980 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c459cf7c-3ec3-4406-97f9-a4a7ecf9e253-tigera-ca-bundle\") pod \"calico-kube-controllers-58df989bd8-4mk2j\" (UID: \"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253\") " pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" Sep 13 00:06:30.666057 kubelet[3167]: I0913 00:06:30.666019 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ecbb697-c2d6-45f9-b896-7067a982618c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-z9zb5\" (UID: \"3ecbb697-c2d6-45f9-b896-7067a982618c\") " pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:30.666317 kubelet[3167]: I0913 00:06:30.666037 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac2a539b-0e55-4a98-8350-b301bacb6e1b-config-volume\") pod \"coredns-7c65d6cfc9-lbdr4\" (UID: \"ac2a539b-0e55-4a98-8350-b301bacb6e1b\") " pod="kube-system/coredns-7c65d6cfc9-lbdr4" Sep 13 00:06:30.666317 kubelet[3167]: I0913 00:06:30.666057 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chgjb\" (UniqueName: \"kubernetes.io/projected/3ecbb697-c2d6-45f9-b896-7067a982618c-kube-api-access-chgjb\") pod \"goldmane-7988f88666-z9zb5\" (UID: \"3ecbb697-c2d6-45f9-b896-7067a982618c\") " pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:30.666317 kubelet[3167]: I0913 00:06:30.666073 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkmw\" (UniqueName: 
\"kubernetes.io/projected/54ae31d6-2edb-4026-9279-0fba6bb34176-kube-api-access-6wkmw\") pod \"calico-apiserver-56596dc84c-7sb8x\" (UID: \"54ae31d6-2edb-4026-9279-0fba6bb34176\") " pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" Sep 13 00:06:30.666317 kubelet[3167]: I0913 00:06:30.666111 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf5b23e2-56dc-49fe-9c10-0589cb6cd48a-calico-apiserver-certs\") pod \"calico-apiserver-56596dc84c-nl9sx\" (UID: \"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a\") " pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" Sep 13 00:06:30.666317 kubelet[3167]: I0913 00:06:30.666127 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ecbb697-c2d6-45f9-b896-7067a982618c-config\") pod \"goldmane-7988f88666-z9zb5\" (UID: \"3ecbb697-c2d6-45f9-b896-7067a982618c\") " pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:30.666474 kubelet[3167]: I0913 00:06:30.666144 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3ecbb697-c2d6-45f9-b896-7067a982618c-goldmane-key-pair\") pod \"goldmane-7988f88666-z9zb5\" (UID: \"3ecbb697-c2d6-45f9-b896-7067a982618c\") " pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:30.666474 kubelet[3167]: I0913 00:06:30.666173 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtk6\" (UniqueName: \"kubernetes.io/projected/ac2a539b-0e55-4a98-8350-b301bacb6e1b-kube-api-access-sjtk6\") pod \"coredns-7c65d6cfc9-lbdr4\" (UID: \"ac2a539b-0e55-4a98-8350-b301bacb6e1b\") " pod="kube-system/coredns-7c65d6cfc9-lbdr4" Sep 13 00:06:30.666474 kubelet[3167]: I0913 00:06:30.666202 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54ae31d6-2edb-4026-9279-0fba6bb34176-calico-apiserver-certs\") pod \"calico-apiserver-56596dc84c-7sb8x\" (UID: \"54ae31d6-2edb-4026-9279-0fba6bb34176\") " pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" Sep 13 00:06:30.672882 kubelet[3167]: E0913 00:06:30.672833 3167 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-17-100\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-100' and this object" logger="UnhandledError" Sep 13 00:06:30.698255 containerd[1977]: time="2025-09-13T00:06:30.697283248Z" level=info msg="shim disconnected" id=079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af namespace=k8s.io Sep 13 00:06:30.698255 containerd[1977]: time="2025-09-13T00:06:30.697861759Z" level=warning msg="cleaning up after shim disconnected" id=079563de19fdea8adf31ecf14551d4a26ded35df3c269ad3d5dbe7ce52cf96af namespace=k8s.io Sep 13 00:06:30.698255 containerd[1977]: time="2025-09-13T00:06:30.697883762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:06:30.862011 containerd[1977]: time="2025-09-13T00:06:30.860970860Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6f5dc47898-hfqln,Uid:82335de8-dedd-4596-af21-5f90659e12f3,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:30.897637 containerd[1977]: time="2025-09-13T00:06:30.897590486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58df989bd8-4mk2j,Uid:c459cf7c-3ec3-4406-97f9-a4a7ecf9e253,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:30.934622 containerd[1977]: time="2025-09-13T00:06:30.934579418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6hgsg,Uid:41318bb3-412c-4e63-8cef-3f95f4d6ba6e,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:30.935153 containerd[1977]: time="2025-09-13T00:06:30.934983967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-nl9sx,Uid:cf5b23e2-56dc-49fe-9c10-0589cb6cd48a,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:06:30.947829 containerd[1977]: time="2025-09-13T00:06:30.947715184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-7sb8x,Uid:54ae31d6-2edb-4026-9279-0fba6bb34176,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:06:30.949454 containerd[1977]: time="2025-09-13T00:06:30.948214613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lbdr4,Uid:ac2a539b-0e55-4a98-8350-b301bacb6e1b,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:31.435343 containerd[1977]: time="2025-09-13T00:06:31.435280058Z" level=error msg="Failed to destroy network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.435954 containerd[1977]: time="2025-09-13T00:06:31.435915518Z" level=error msg="encountered an error cleaning up failed sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.436126 containerd[1977]: time="2025-09-13T00:06:31.436097876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lbdr4,Uid:ac2a539b-0e55-4a98-8350-b301bacb6e1b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.436484 containerd[1977]: time="2025-09-13T00:06:31.436445411Z" level=error msg="Failed to destroy network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.436901 containerd[1977]: time="2025-09-13T00:06:31.436869331Z" level=error msg="encountered an error cleaning up failed sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 13 00:06:31.437048 containerd[1977]: time="2025-09-13T00:06:31.437021209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f5dc47898-hfqln,Uid:82335de8-dedd-4596-af21-5f90659e12f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.451121 containerd[1977]: time="2025-09-13T00:06:31.451077361Z" level=error msg="Failed to destroy network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.451270 kubelet[3167]: E0913 00:06:31.451153 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.451270 kubelet[3167]: E0913 00:06:31.451233 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5dc47898-hfqln" Sep 13 00:06:31.451270 kubelet[3167]: E0913 00:06:31.451262 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f5dc47898-hfqln" Sep 13 00:06:31.451803 kubelet[3167]: E0913 00:06:31.451314 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f5dc47898-hfqln_calico-system(82335de8-dedd-4596-af21-5f90659e12f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f5dc47898-hfqln_calico-system(82335de8-dedd-4596-af21-5f90659e12f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f5dc47898-hfqln" podUID="82335de8-dedd-4596-af21-5f90659e12f3" Sep 13 00:06:31.453385 kubelet[3167]: E0913 00:06:31.452338 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.453385 kubelet[3167]: E0913 00:06:31.452398 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lbdr4" Sep 13 00:06:31.453385 kubelet[3167]: E0913 00:06:31.452548 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lbdr4" Sep 13 00:06:31.453617 kubelet[3167]: E0913 00:06:31.452613 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lbdr4_kube-system(ac2a539b-0e55-4a98-8350-b301bacb6e1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lbdr4_kube-system(ac2a539b-0e55-4a98-8350-b301bacb6e1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lbdr4" podUID="ac2a539b-0e55-4a98-8350-b301bacb6e1b" Sep 13 00:06:31.455131 containerd[1977]: time="2025-09-13T00:06:31.454690835Z" level=error msg="Failed to destroy network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.455558 containerd[1977]: time="2025-09-13T00:06:31.455522311Z" level=error msg="encountered an error cleaning up failed sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.455736 containerd[1977]: time="2025-09-13T00:06:31.455708011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-nl9sx,Uid:cf5b23e2-56dc-49fe-9c10-0589cb6cd48a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.457562 containerd[1977]: time="2025-09-13T00:06:31.455539675Z" level=error msg="encountered an error cleaning up failed sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.457699 containerd[1977]: time="2025-09-13T00:06:31.457672739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6hgsg,Uid:41318bb3-412c-4e63-8cef-3f95f4d6ba6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.457819 containerd[1977]: time="2025-09-13T00:06:31.454890628Z" level=error msg="Failed to destroy network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.458348 containerd[1977]: time="2025-09-13T00:06:31.458218676Z" level=error msg="encountered an error cleaning up failed sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.458348 containerd[1977]: time="2025-09-13T00:06:31.458267040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58df989bd8-4mk2j,Uid:c459cf7c-3ec3-4406-97f9-a4a7ecf9e253,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.458581 kubelet[3167]: E0913 00:06:31.458255 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.459425 kubelet[3167]: E0913 00:06:31.458755 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.459425 kubelet[3167]: E0913 00:06:31.458803 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" Sep 13 00:06:31.459425 kubelet[3167]: E0913 00:06:31.458834 3167 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" Sep 13 00:06:31.459603 kubelet[3167]: E0913 00:06:31.458881 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58df989bd8-4mk2j_calico-system(c459cf7c-3ec3-4406-97f9-a4a7ecf9e253)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58df989bd8-4mk2j_calico-system(c459cf7c-3ec3-4406-97f9-a4a7ecf9e253)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" podUID="c459cf7c-3ec3-4406-97f9-a4a7ecf9e253" Sep 13 00:06:31.459603 kubelet[3167]: E0913 00:06:31.459174 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6hgsg" Sep 13 00:06:31.459603 kubelet[3167]: E0913 00:06:31.459213 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6hgsg" Sep 13 00:06:31.459782 kubelet[3167]: E0913 00:06:31.459252 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6hgsg_kube-system(41318bb3-412c-4e63-8cef-3f95f4d6ba6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6hgsg_kube-system(41318bb3-412c-4e63-8cef-3f95f4d6ba6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6hgsg" podUID="41318bb3-412c-4e63-8cef-3f95f4d6ba6e" Sep 13 00:06:31.459782 kubelet[3167]: E0913 00:06:31.459304 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.459782 kubelet[3167]: 
E0913 00:06:31.459332 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" Sep 13 00:06:31.459933 kubelet[3167]: E0913 00:06:31.459351 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" Sep 13 00:06:31.459933 kubelet[3167]: E0913 00:06:31.459384 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56596dc84c-nl9sx_calico-apiserver(cf5b23e2-56dc-49fe-9c10-0589cb6cd48a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56596dc84c-nl9sx_calico-apiserver(cf5b23e2-56dc-49fe-9c10-0589cb6cd48a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" podUID="cf5b23e2-56dc-49fe-9c10-0589cb6cd48a" Sep 13 00:06:31.463569 containerd[1977]: time="2025-09-13T00:06:31.463109799Z" level=error msg="Failed to destroy network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.463569 containerd[1977]: time="2025-09-13T00:06:31.463456848Z" level=error msg="encountered an error cleaning up failed sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.463569 containerd[1977]: time="2025-09-13T00:06:31.463519831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-7sb8x,Uid:54ae31d6-2edb-4026-9279-0fba6bb34176,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.464878 kubelet[3167]: E0913 00:06:31.464127 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.464878 kubelet[3167]: E0913 00:06:31.464189 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" Sep 13 00:06:31.464878 kubelet[3167]: E0913 00:06:31.464212 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" Sep 13 00:06:31.465132 kubelet[3167]: E0913 00:06:31.464258 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56596dc84c-7sb8x_calico-apiserver(54ae31d6-2edb-4026-9279-0fba6bb34176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56596dc84c-7sb8x_calico-apiserver(54ae31d6-2edb-4026-9279-0fba6bb34176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" podUID="54ae31d6-2edb-4026-9279-0fba6bb34176" Sep 13 00:06:31.617269 kubelet[3167]: I0913 00:06:31.616559 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:31.623817 containerd[1977]: time="2025-09-13T00:06:31.623681973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:06:31.627451 kubelet[3167]: I0913 00:06:31.627402 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:31.667434 kubelet[3167]: I0913 00:06:31.665550 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:31.705618 containerd[1977]: time="2025-09-13T00:06:31.705488453Z" level=info msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" Sep 13 00:06:31.706071 kubelet[3167]: I0913 00:06:31.706041 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:31.706214 containerd[1977]: time="2025-09-13T00:06:31.705485395Z" level=info msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" Sep 13 00:06:31.712193 containerd[1977]: time="2025-09-13T00:06:31.712149548Z" level=info msg="Ensure that sandbox f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222 in task-service has been cleanup successfully" Sep 13 
00:06:31.712629 containerd[1977]: time="2025-09-13T00:06:31.712598874Z" level=info msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" Sep 13 00:06:31.714284 containerd[1977]: time="2025-09-13T00:06:31.713072669Z" level=info msg="Ensure that sandbox 87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa in task-service has been cleanup successfully" Sep 13 00:06:31.719294 kubelet[3167]: I0913 00:06:31.718755 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:31.722417 containerd[1977]: time="2025-09-13T00:06:31.722376262Z" level=info msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" Sep 13 00:06:31.722778 containerd[1977]: time="2025-09-13T00:06:31.712149742Z" level=info msg="Ensure that sandbox dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f in task-service has been cleanup successfully" Sep 13 00:06:31.722895 containerd[1977]: time="2025-09-13T00:06:31.705774939Z" level=info msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" Sep 13 00:06:31.723032 containerd[1977]: time="2025-09-13T00:06:31.723011839Z" level=info msg="Ensure that sandbox e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de in task-service has been cleanup successfully" Sep 13 00:06:31.725054 containerd[1977]: time="2025-09-13T00:06:31.725021043Z" level=info msg="Ensure that sandbox ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b in task-service has been cleanup successfully" Sep 13 00:06:31.731164 kubelet[3167]: I0913 00:06:31.731132 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:31.733318 containerd[1977]: time="2025-09-13T00:06:31.733283394Z" level=info msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" Sep 13 00:06:31.736936 containerd[1977]: time="2025-09-13T00:06:31.736896947Z" level=info msg="Ensure that sandbox 050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915 in task-service has been cleanup successfully" Sep 13 00:06:31.771497 kubelet[3167]: E0913 00:06:31.771452 3167 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:31.771661 kubelet[3167]: E0913 00:06:31.771578 3167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ecbb697-c2d6-45f9-b896-7067a982618c-goldmane-key-pair podName:3ecbb697-c2d6-45f9-b896-7067a982618c nodeName:}" failed. No retries permitted until 2025-09-13 00:06:32.271545959 +0000 UTC m=+35.110515982 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/3ecbb697-c2d6-45f9-b896-7067a982618c-goldmane-key-pair") pod "goldmane-7988f88666-z9zb5" (UID: "3ecbb697-c2d6-45f9-b896-7067a982618c") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:06:31.849514 containerd[1977]: time="2025-09-13T00:06:31.849398806Z" level=error msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" failed" error="failed to destroy network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.849879 kubelet[3167]: E0913 00:06:31.849704 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:31.850128 containerd[1977]: time="2025-09-13T00:06:31.849900555Z" level=error msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" failed" error="failed to destroy network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.850296 kubelet[3167]: E0913 00:06:31.850144 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:31.862171 kubelet[3167]: E0913 00:06:31.850181 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b"} Sep 13 00:06:31.862171 kubelet[3167]: E0913 00:06:31.862034 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54ae31d6-2edb-4026-9279-0fba6bb34176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.862171 kubelet[3167]: E0913 00:06:31.849777 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de"} Sep 13 00:06:31.862171 kubelet[3167]: E0913 00:06:31.862131 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82335de8-dedd-4596-af21-5f90659e12f3\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.862171 kubelet[3167]: E0913 00:06:31.862162 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82335de8-dedd-4596-af21-5f90659e12f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f5dc47898-hfqln" podUID="82335de8-dedd-4596-af21-5f90659e12f3" Sep 13 00:06:31.862770 kubelet[3167]: E0913 00:06:31.862208 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54ae31d6-2edb-4026-9279-0fba6bb34176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" podUID="54ae31d6-2edb-4026-9279-0fba6bb34176" Sep 13 00:06:31.877626 containerd[1977]: time="2025-09-13T00:06:31.876924658Z" level=error msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" failed" error="failed to destroy network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.877787 kubelet[3167]: E0913 00:06:31.877428 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:31.877787 kubelet[3167]: E0913 00:06:31.877493 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa"} Sep 13 00:06:31.877787 kubelet[3167]: E0913 00:06:31.877543 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41318bb3-412c-4e63-8cef-3f95f4d6ba6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.877787 kubelet[3167]: E0913 00:06:31.877579 3167 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41318bb3-412c-4e63-8cef-3f95f4d6ba6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6hgsg" podUID="41318bb3-412c-4e63-8cef-3f95f4d6ba6e" Sep 13 00:06:31.885139 containerd[1977]: time="2025-09-13T00:06:31.884138253Z" level=error msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" failed" error="failed to destroy network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.885288 kubelet[3167]: E0913 00:06:31.884563 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:31.885288 kubelet[3167]: E0913 00:06:31.884641 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915"} Sep 13 00:06:31.885288 kubelet[3167]: E0913 00:06:31.884702 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.885288 kubelet[3167]: E0913 00:06:31.884756 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" podUID="c459cf7c-3ec3-4406-97f9-a4a7ecf9e253" Sep 13 00:06:31.888210 containerd[1977]: time="2025-09-13T00:06:31.888099146Z" level=error msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" failed" error="failed to destroy network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.888761 kubelet[3167]: E0913 
00:06:31.888506 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:31.888761 kubelet[3167]: E0913 00:06:31.888570 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f"} Sep 13 00:06:31.888761 kubelet[3167]: E0913 00:06:31.888611 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac2a539b-0e55-4a98-8350-b301bacb6e1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.888761 kubelet[3167]: E0913 00:06:31.888643 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac2a539b-0e55-4a98-8350-b301bacb6e1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lbdr4" podUID="ac2a539b-0e55-4a98-8350-b301bacb6e1b" Sep 13 00:06:31.889921 containerd[1977]: time="2025-09-13T00:06:31.889881723Z" level=error msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" failed" error="failed to destroy network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:31.890201 kubelet[3167]: E0913 00:06:31.890165 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:31.890300 kubelet[3167]: E0913 00:06:31.890215 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222"} Sep 13 00:06:31.890300 kubelet[3167]: E0913 00:06:31.890255 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:31.890488 kubelet[3167]: E0913 00:06:31.890287 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" podUID="cf5b23e2-56dc-49fe-9c10-0589cb6cd48a" Sep 13 00:06:32.347692 systemd[1]: Created slice kubepods-besteffort-pod85496fec_fe0d_4f47_8c0f_4732dc0de194.slice - libcontainer container kubepods-besteffort-pod85496fec_fe0d_4f47_8c0f_4732dc0de194.slice. Sep 13 00:06:32.350392 containerd[1977]: time="2025-09-13T00:06:32.350354559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hn4jz,Uid:85496fec-fe0d-4f47-8c0f-4732dc0de194,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:32.408824 containerd[1977]: time="2025-09-13T00:06:32.408288826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-z9zb5,Uid:3ecbb697-c2d6-45f9-b896-7067a982618c,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:32.460334 containerd[1977]: time="2025-09-13T00:06:32.460179722Z" level=error msg="Failed to destroy network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.460856 containerd[1977]: time="2025-09-13T00:06:32.460774666Z" level=error msg="encountered an error cleaning up failed sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.461009 containerd[1977]: time="2025-09-13T00:06:32.460980475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hn4jz,Uid:85496fec-fe0d-4f47-8c0f-4732dc0de194,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.461352 kubelet[3167]: E0913 00:06:32.461313 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.461774 kubelet[3167]: E0913 00:06:32.461372 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:32.461774 kubelet[3167]: E0913 00:06:32.461396 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hn4jz" Sep 13 00:06:32.461774 kubelet[3167]: E0913 00:06:32.461484 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hn4jz_calico-system(85496fec-fe0d-4f47-8c0f-4732dc0de194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hn4jz_calico-system(85496fec-fe0d-4f47-8c0f-4732dc0de194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:32.504988 containerd[1977]: time="2025-09-13T00:06:32.504931697Z" level=error msg="Failed to destroy network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.505303 containerd[1977]: time="2025-09-13T00:06:32.505269433Z" level=error msg="encountered an error cleaning up failed sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.505386 containerd[1977]: time="2025-09-13T00:06:32.505333283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-z9zb5,Uid:3ecbb697-c2d6-45f9-b896-7067a982618c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.505665 kubelet[3167]: E0913 00:06:32.505589 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.505747 kubelet[3167]: E0913 00:06:32.505690 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:32.505747 kubelet[3167]: E0913 00:06:32.505719 3167 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-z9zb5" Sep 13 00:06:32.505853 kubelet[3167]: E0913 00:06:32.505776 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-z9zb5_calico-system(3ecbb697-c2d6-45f9-b896-7067a982618c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-z9zb5_calico-system(3ecbb697-c2d6-45f9-b896-7067a982618c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-z9zb5" podUID="3ecbb697-c2d6-45f9-b896-7067a982618c" Sep 13 00:06:32.672909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c-shm.mount: Deactivated successfully. Sep 13 00:06:32.673016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b-shm.mount: Deactivated successfully. 
Sep 13 00:06:32.733668 kubelet[3167]: I0913 00:06:32.733623 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:32.735147 containerd[1977]: time="2025-09-13T00:06:32.734729290Z" level=info msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" Sep 13 00:06:32.735147 containerd[1977]: time="2025-09-13T00:06:32.734897099Z" level=info msg="Ensure that sandbox de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b in task-service has been cleanup successfully" Sep 13 00:06:32.736220 kubelet[3167]: I0913 00:06:32.736200 3167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:32.737157 containerd[1977]: time="2025-09-13T00:06:32.736769057Z" level=info msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" Sep 13 00:06:32.737157 containerd[1977]: time="2025-09-13T00:06:32.736945741Z" level=info msg="Ensure that sandbox 8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c in task-service has been cleanup successfully" Sep 13 00:06:32.775866 containerd[1977]: time="2025-09-13T00:06:32.775819541Z" level=error msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" failed" error="failed to destroy network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.776054 kubelet[3167]: E0913 00:06:32.776022 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:32.776118 kubelet[3167]: E0913 00:06:32.776066 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c"} Sep 13 00:06:32.776118 kubelet[3167]: E0913 00:06:32.776106 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ecbb697-c2d6-45f9-b896-7067a982618c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:32.776213 kubelet[3167]: E0913 00:06:32.776128 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ecbb697-c2d6-45f9-b896-7067a982618c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-z9zb5" podUID="3ecbb697-c2d6-45f9-b896-7067a982618c" Sep 13 00:06:32.778042 containerd[1977]: time="2025-09-13T00:06:32.777996665Z" level=error msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" failed" error="failed to destroy network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:06:32.778242 kubelet[3167]: E0913 00:06:32.778207 3167 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:32.778333 kubelet[3167]: E0913 00:06:32.778253 3167 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b"} Sep 13 00:06:32.778333 kubelet[3167]: E0913 00:06:32.778295 3167 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85496fec-fe0d-4f47-8c0f-4732dc0de194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:06:32.778495 kubelet[3167]: E0913 00:06:32.778320 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85496fec-fe0d-4f47-8c0f-4732dc0de194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hn4jz" podUID="85496fec-fe0d-4f47-8c0f-4732dc0de194" Sep 13 00:06:39.498822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517306507.mount: Deactivated successfully. 
Sep 13 00:06:39.568970 containerd[1977]: time="2025-09-13T00:06:39.565720834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:06:39.568970 containerd[1977]: time="2025-09-13T00:06:39.568667002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:39.602187 containerd[1977]: time="2025-09-13T00:06:39.581079327Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:39.602187 containerd[1977]: time="2025-09-13T00:06:39.585400021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.960360542s" Sep 13 00:06:39.602187 containerd[1977]: time="2025-09-13T00:06:39.585449647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:06:39.602187 containerd[1977]: time="2025-09-13T00:06:39.585931711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:39.614725 containerd[1977]: time="2025-09-13T00:06:39.614677290Z" level=info msg="CreateContainer within sandbox \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:06:39.645799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125512809.mount: Deactivated successfully. Sep 13 00:06:39.649167 containerd[1977]: time="2025-09-13T00:06:39.649119867Z" level=info msg="CreateContainer within sandbox \"241232825182aef67fde7eb774fa454ec5099204b5af96269c22a5f0134e33b9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76\"" Sep 13 00:06:39.649863 containerd[1977]: time="2025-09-13T00:06:39.649781086Z" level=info msg="StartContainer for \"2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76\"" Sep 13 00:06:39.754482 kubelet[3167]: I0913 00:06:39.753260 3167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:06:39.839637 systemd[1]: Started cri-containerd-2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76.scope - libcontainer container 2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76. Sep 13 00:06:39.902726 containerd[1977]: time="2025-09-13T00:06:39.902598514Z" level=info msg="StartContainer for \"2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76\" returns successfully" Sep 13 00:06:40.279249 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:06:40.279383 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:06:42.235640 kernel: bpftool[4770]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:06:42.344684 containerd[1977]: time="2025-09-13T00:06:42.344628816Z" level=info msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" Sep 13 00:06:42.523345 kubelet[3167]: I0913 00:06:42.501951 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mktlg" podStartSLOduration=4.23770568 podStartE2EDuration="23.495451774s" podCreationTimestamp="2025-09-13 00:06:19 +0000 UTC" firstStartedPulling="2025-09-13 00:06:20.329112428 +0000 UTC m=+23.168082450" lastFinishedPulling="2025-09-13 00:06:39.586858522 +0000 UTC m=+42.425828544" observedRunningTime="2025-09-13 00:06:40.902941443 +0000 UTC m=+43.741911485" watchObservedRunningTime="2025-09-13 00:06:42.495451774 +0000 UTC m=+45.334421818" Sep 13 00:06:42.581152 containerd[1977]: time="2025-09-13T00:06:42.581118320Z" level=info msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" Sep 13 00:06:42.781141 (udev-worker)[4597]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:06:42.799105 systemd-networkd[1903]: vxlan.calico: Link UP Sep 13 00:06:42.799110 systemd-networkd[1903]: vxlan.calico: Gained carrier Sep 13 00:06:42.864899 (udev-worker)[4858]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.498 [INFO][4797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.501 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" iface="eth0" netns="/var/run/netns/cni-24f8a78c-88fa-25ec-5fc3-5207fd710149" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.507 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" iface="eth0" netns="/var/run/netns/cni-24f8a78c-88fa-25ec-5fc3-5207fd710149" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.510 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" iface="eth0" netns="/var/run/netns/cni-24f8a78c-88fa-25ec-5fc3-5207fd710149" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.511 [INFO][4797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:42.511 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.049 [INFO][4806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.055 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.055 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.078 [WARNING][4806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.078 [INFO][4806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.081 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:43.090999 containerd[1977]: 2025-09-13 00:06:43.085 [INFO][4797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:06:43.102284 systemd[1]: run-netns-cni\x2d24f8a78c\x2d88fa\x2d25ec\x2d5fc3\x2d5207fd710149.mount: Deactivated successfully. Sep 13 00:06:43.114915 containerd[1977]: time="2025-09-13T00:06:43.114845532Z" level=info msg="TearDown network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" successfully" Sep 13 00:06:43.115603 containerd[1977]: time="2025-09-13T00:06:43.115459499Z" level=info msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" returns successfully" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.705 [INFO][4828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.705 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" iface="eth0" netns="/var/run/netns/cni-f6dea386-fe2c-2ac3-00ed-9d21247891dc" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.706 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" iface="eth0" netns="/var/run/netns/cni-f6dea386-fe2c-2ac3-00ed-9d21247891dc" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.707 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" iface="eth0" netns="/var/run/netns/cni-f6dea386-fe2c-2ac3-00ed-9d21247891dc" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.707 [INFO][4828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:42.707 [INFO][4828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.052 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.056 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.081 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.103 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.103 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.110 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:43.127671 containerd[1977]: 2025-09-13 00:06:43.121 [INFO][4828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:43.159616 containerd[1977]: time="2025-09-13T00:06:43.130931510Z" level=info msg="TearDown network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" successfully" Sep 13 00:06:43.159616 containerd[1977]: time="2025-09-13T00:06:43.130970491Z" level=info msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" returns successfully" Sep 13 00:06:43.159616 containerd[1977]: time="2025-09-13T00:06:43.137389687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6hgsg,Uid:41318bb3-412c-4e63-8cef-3f95f4d6ba6e,Namespace:kube-system,Attempt:1,}" Sep 13 00:06:43.138350 systemd[1]: run-netns-cni\x2df6dea386\x2dfe2c\x2d2ac3\x2d00ed\x2d9d21247891dc.mount: Deactivated successfully. 
Sep 13 00:06:43.257840 kubelet[3167]: I0913 00:06:43.257623 3167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82335de8-dedd-4596-af21-5f90659e12f3-whisker-backend-key-pair\") pod \"82335de8-dedd-4596-af21-5f90659e12f3\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " Sep 13 00:06:43.257840 kubelet[3167]: I0913 00:06:43.257701 3167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxvj7\" (UniqueName: \"kubernetes.io/projected/82335de8-dedd-4596-af21-5f90659e12f3-kube-api-access-fxvj7\") pod \"82335de8-dedd-4596-af21-5f90659e12f3\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " Sep 13 00:06:43.321657 systemd[1]: var-lib-kubelet-pods-82335de8\x2ddedd\x2d4596\x2daf21\x2d5f90659e12f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxvj7.mount: Deactivated successfully. Sep 13 00:06:43.321801 systemd[1]: var-lib-kubelet-pods-82335de8\x2ddedd\x2d4596\x2daf21\x2d5f90659e12f3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:06:43.331507 kubelet[3167]: I0913 00:06:43.327606 3167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82335de8-dedd-4596-af21-5f90659e12f3-kube-api-access-fxvj7" (OuterVolumeSpecName: "kube-api-access-fxvj7") pod "82335de8-dedd-4596-af21-5f90659e12f3" (UID: "82335de8-dedd-4596-af21-5f90659e12f3"). InnerVolumeSpecName "kube-api-access-fxvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:06:43.338077 kubelet[3167]: I0913 00:06:43.336181 3167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82335de8-dedd-4596-af21-5f90659e12f3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "82335de8-dedd-4596-af21-5f90659e12f3" (UID: "82335de8-dedd-4596-af21-5f90659e12f3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:06:43.365293 kubelet[3167]: I0913 00:06:43.365056 3167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82335de8-dedd-4596-af21-5f90659e12f3-whisker-ca-bundle\") pod \"82335de8-dedd-4596-af21-5f90659e12f3\" (UID: \"82335de8-dedd-4596-af21-5f90659e12f3\") " Sep 13 00:06:43.365293 kubelet[3167]: I0913 00:06:43.365181 3167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxvj7\" (UniqueName: \"kubernetes.io/projected/82335de8-dedd-4596-af21-5f90659e12f3-kube-api-access-fxvj7\") on node \"ip-172-31-17-100\" DevicePath \"\"" Sep 13 00:06:43.365293 kubelet[3167]: I0913 00:06:43.365202 3167 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82335de8-dedd-4596-af21-5f90659e12f3-whisker-backend-key-pair\") on node \"ip-172-31-17-100\" DevicePath \"\"" Sep 13 00:06:43.365603 kubelet[3167]: I0913 00:06:43.365537 3167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82335de8-dedd-4596-af21-5f90659e12f3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "82335de8-dedd-4596-af21-5f90659e12f3" (UID: "82335de8-dedd-4596-af21-5f90659e12f3"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:06:43.417582 systemd-networkd[1903]: cali4533068f53e: Link UP Sep 13 00:06:43.418187 systemd-networkd[1903]: cali4533068f53e: Gained carrier Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.268 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0 coredns-7c65d6cfc9- kube-system 41318bb3-412c-4e63-8cef-3f95f4d6ba6e 868 0 2025-09-13 00:06:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-100 coredns-7c65d6cfc9-6hgsg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4533068f53e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.273 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.350 [INFO][4915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" HandleID="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.350 [INFO][4915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" HandleID="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-100", "pod":"coredns-7c65d6cfc9-6hgsg", "timestamp":"2025-09-13 00:06:43.350633501 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.351 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.351 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.351 [INFO][4915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.365 [INFO][4915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.379 [INFO][4915] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.383 [INFO][4915] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.385 [INFO][4915] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.388 [INFO][4915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.388 [INFO][4915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.390 [INFO][4915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83 Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.397 [INFO][4915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.403 [INFO][4915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.193/26] block=192.168.68.192/26 handle="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.403 [INFO][4915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.193/26] handle="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" host="ip-172-31-17-100" Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.403 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:06:43.439474 containerd[1977]: 2025-09-13 00:06:43.403 [INFO][4915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.193/26] IPv6=[] ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" HandleID="k8s-pod-network.8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.406 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41318bb3-412c-4e63-8cef-3f95f4d6ba6e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"coredns-7c65d6cfc9-6hgsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4533068f53e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.406 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.193/32] ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.406 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4533068f53e ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.420 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" 
WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.422 [INFO][4887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41318bb3-412c-4e63-8cef-3f95f4d6ba6e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83", Pod:"coredns-7c65d6cfc9-6hgsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4533068f53e", MAC:"2e:14:53:a5:0d:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:43.441778 containerd[1977]: 2025-09-13 00:06:43.436 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6hgsg" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:06:43.465510 kubelet[3167]: I0913 00:06:43.465458 3167 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82335de8-dedd-4596-af21-5f90659e12f3-whisker-ca-bundle\") on node \"ip-172-31-17-100\" DevicePath \"\"" Sep 13 00:06:43.477269 containerd[1977]: time="2025-09-13T00:06:43.476944760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:43.477269 containerd[1977]: time="2025-09-13T00:06:43.477021445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:43.477269 containerd[1977]: time="2025-09-13T00:06:43.477046357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:43.479372 containerd[1977]: time="2025-09-13T00:06:43.478121747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:43.494918 systemd[1]: Started cri-containerd-8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83.scope - libcontainer container 8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83. Sep 13 00:06:43.555476 containerd[1977]: time="2025-09-13T00:06:43.555325121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6hgsg,Uid:41318bb3-412c-4e63-8cef-3f95f4d6ba6e,Namespace:kube-system,Attempt:1,} returns sandbox id \"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83\"" Sep 13 00:06:43.559058 containerd[1977]: time="2025-09-13T00:06:43.558939259Z" level=info msg="CreateContainer within sandbox \"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:43.610272 containerd[1977]: time="2025-09-13T00:06:43.610222240Z" level=info msg="CreateContainer within sandbox \"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fce9e28bcf22e8ca95cbc08a6b1feeb3b035d5e341d23d416854ba854cbf9e34\"" Sep 13 00:06:43.614145 containerd[1977]: time="2025-09-13T00:06:43.613086579Z" level=info msg="StartContainer for \"fce9e28bcf22e8ca95cbc08a6b1feeb3b035d5e341d23d416854ba854cbf9e34\"" Sep 13 00:06:43.646624 systemd[1]: Started cri-containerd-fce9e28bcf22e8ca95cbc08a6b1feeb3b035d5e341d23d416854ba854cbf9e34.scope - libcontainer container fce9e28bcf22e8ca95cbc08a6b1feeb3b035d5e341d23d416854ba854cbf9e34. Sep 13 00:06:43.694442 containerd[1977]: time="2025-09-13T00:06:43.694366847Z" level=info msg="StartContainer for \"fce9e28bcf22e8ca95cbc08a6b1feeb3b035d5e341d23d416854ba854cbf9e34\" returns successfully" Sep 13 00:06:43.913719 systemd[1]: Removed slice kubepods-besteffort-pod82335de8_dedd_4596_af21_5f90659e12f3.slice - libcontainer container kubepods-besteffort-pod82335de8_dedd_4596_af21_5f90659e12f3.slice. Sep 13 00:06:43.919841 kubelet[3167]: I0913 00:06:43.919540 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6hgsg" podStartSLOduration=40.91951927 podStartE2EDuration="40.91951927s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:43.917736128 +0000 UTC m=+46.756706170" watchObservedRunningTime="2025-09-13 00:06:43.91951927 +0000 UTC m=+46.758489312" Sep 13 00:06:44.059922 systemd[1]: Created slice kubepods-besteffort-pod72487ff0_2060_4952_bd7b_8353e06ebaa1.slice - libcontainer container kubepods-besteffort-pod72487ff0_2060_4952_bd7b_8353e06ebaa1.slice. 
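The startup-latency numbers logged here are internally consistent. For the calico-node entry at 00:06:42 above, podStartE2EDuration is 23.495451774 s, of which image pulling took lastFinishedPulling minus firstStartedPulling, i.e. 00:06:39.586858522 minus 00:06:20.329112428, about 19.257746094 s; subtracting leaves podStartSLOduration = 4.23770568 s, exactly as logged, because the SLO metric excludes image-pull time. For coredns just above, both pull timestamps are the zero time (0001-01-01), meaning the image was already on the node, so the SLO duration is simply the roughly 40.9 s from pod creation (00:06:03) to observed running (00:06:43.9).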
Sep 13 00:06:44.170139 kubelet[3167]: I0913 00:06:44.170014 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72487ff0-2060-4952-bd7b-8353e06ebaa1-whisker-backend-key-pair\") pod \"whisker-866695b4f-qwdmz\" (UID: \"72487ff0-2060-4952-bd7b-8353e06ebaa1\") " pod="calico-system/whisker-866695b4f-qwdmz" Sep 13 00:06:44.170139 kubelet[3167]: I0913 00:06:44.170061 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72487ff0-2060-4952-bd7b-8353e06ebaa1-whisker-ca-bundle\") pod \"whisker-866695b4f-qwdmz\" (UID: \"72487ff0-2060-4952-bd7b-8353e06ebaa1\") " pod="calico-system/whisker-866695b4f-qwdmz" Sep 13 00:06:44.170139 kubelet[3167]: I0913 00:06:44.170086 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc67n\" (UniqueName: \"kubernetes.io/projected/72487ff0-2060-4952-bd7b-8353e06ebaa1-kube-api-access-bc67n\") pod \"whisker-866695b4f-qwdmz\" (UID: \"72487ff0-2060-4952-bd7b-8353e06ebaa1\") " pod="calico-system/whisker-866695b4f-qwdmz" Sep 13 00:06:44.345238 containerd[1977]: time="2025-09-13T00:06:44.342957793Z" level=info msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" Sep 13 00:06:44.364449 containerd[1977]: time="2025-09-13T00:06:44.362909631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866695b4f-qwdmz,Uid:72487ff0-2060-4952-bd7b-8353e06ebaa1,Namespace:calico-system,Attempt:0,}" Sep 13 00:06:44.388127 systemd-networkd[1903]: vxlan.calico: Gained IPv6LL Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.425 [INFO][5026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.427 [INFO][5026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" iface="eth0" netns="/var/run/netns/cni-8aa8eecb-20f1-7aed-8b69-9c13130b5bfb" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.428 [INFO][5026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" iface="eth0" netns="/var/run/netns/cni-8aa8eecb-20f1-7aed-8b69-9c13130b5bfb" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.430 [INFO][5026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" iface="eth0" netns="/var/run/netns/cni-8aa8eecb-20f1-7aed-8b69-9c13130b5bfb" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.430 [INFO][5026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.430 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.480 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.480 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.480 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.488 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.489 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.493 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:44.499862 containerd[1977]: 2025-09-13 00:06:44.496 [INFO][5026] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:44.501186 containerd[1977]: time="2025-09-13T00:06:44.499940675Z" level=info msg="TearDown network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" successfully" Sep 13 00:06:44.501186 containerd[1977]: time="2025-09-13T00:06:44.499974714Z" level=info msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" returns successfully" Sep 13 00:06:44.501186 containerd[1977]: time="2025-09-13T00:06:44.501164378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hn4jz,Uid:85496fec-fe0d-4f47-8c0f-4732dc0de194,Namespace:calico-system,Attempt:1,}" Sep 13 00:06:44.589541 systemd-networkd[1903]: calib7518559638: Link UP Sep 13 00:06:44.589836 systemd-networkd[1903]: calib7518559638: Gained carrier Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.475 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0 whisker-866695b4f- calico-system 72487ff0-2060-4952-bd7b-8353e06ebaa1 907 0 2025-09-13 00:06:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:866695b4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-100 whisker-866695b4f-qwdmz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib7518559638 [] [] }} ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.476 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.515 [INFO][5054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" HandleID="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Workload="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.515 [INFO][5054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" HandleID="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Workload="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-100", "pod":"whisker-866695b4f-qwdmz", "timestamp":"2025-09-13 00:06:44.515741799 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.515 [INFO][5054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.516 [INFO][5054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.516 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.525 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.532 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.547 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.551 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.555 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.555 [INFO][5054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.557 [INFO][5054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78 Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.563 [INFO][5054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.574 [INFO][5054] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.194/26] block=192.168.68.192/26 handle="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.574 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.194/26] handle="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" host="ip-172-31-17-100" Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.575 [INFO][5054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:06:44.618197 containerd[1977]: 2025-09-13 00:06:44.575 [INFO][5054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.194/26] IPv6=[] ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" HandleID="k8s-pod-network.256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Workload="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.581 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0", GenerateName:"whisker-866695b4f-", Namespace:"calico-system", SelfLink:"", UID:"72487ff0-2060-4952-bd7b-8353e06ebaa1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"866695b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"whisker-866695b4f-qwdmz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib7518559638", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.581 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.194/32] ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.582 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7518559638 ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.591 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.594 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" 
WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0", GenerateName:"whisker-866695b4f-", Namespace:"calico-system", SelfLink:"", UID:"72487ff0-2060-4952-bd7b-8353e06ebaa1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"866695b4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78", Pod:"whisker-866695b4f-qwdmz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib7518559638", MAC:"9a:40:28:3e:53:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:44.623345 containerd[1977]: 2025-09-13 00:06:44.611 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78" Namespace="calico-system" Pod="whisker-866695b4f-qwdmz" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--866695b4f--qwdmz-eth0" Sep 13 00:06:44.684226 containerd[1977]: time="2025-09-13T00:06:44.683595646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:44.684226 containerd[1977]: time="2025-09-13T00:06:44.683815553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:44.684664 containerd[1977]: time="2025-09-13T00:06:44.684577307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.686689 containerd[1977]: time="2025-09-13T00:06:44.685481519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.711659 systemd[1]: Started cri-containerd-256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78.scope - libcontainer container 256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78. 
Sep 13 00:06:44.751714 systemd-networkd[1903]: cali171ab8f48e2: Link UP Sep 13 00:06:44.754494 systemd-networkd[1903]: cali171ab8f48e2: Gained carrier Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.592 [INFO][5061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0 csi-node-driver- calico-system 85496fec-fe0d-4f47-8c0f-4732dc0de194 910 0 2025-09-13 00:06:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-100 csi-node-driver-hn4jz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali171ab8f48e2 [] [] }} ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.592 [INFO][5061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.655 [INFO][5079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" HandleID="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.655 [INFO][5079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" HandleID="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-100", "pod":"csi-node-driver-hn4jz", "timestamp":"2025-09-13 00:06:44.655304924 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.655 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.655 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.655 [INFO][5079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.666 [INFO][5079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.673 [INFO][5079] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.683 [INFO][5079] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.689 [INFO][5079] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.696 [INFO][5079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.697 [INFO][5079] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.702 [INFO][5079] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0 Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.730 [INFO][5079] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.743 [INFO][5079] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.195/26] block=192.168.68.192/26 handle="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.745 [INFO][5079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.195/26] handle="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" host="ip-172-31-17-100" Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.745 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:06:44.788330 containerd[1977]: 2025-09-13 00:06:44.745 [INFO][5079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.195/26] IPv6=[] ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" HandleID="k8s-pod-network.1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.748 [INFO][5061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85496fec-fe0d-4f47-8c0f-4732dc0de194", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"csi-node-driver-hn4jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali171ab8f48e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.748 [INFO][5061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.195/32] ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.748 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali171ab8f48e2 ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.755 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.756 [INFO][5061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" 
Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85496fec-fe0d-4f47-8c0f-4732dc0de194", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0", Pod:"csi-node-driver-hn4jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali171ab8f48e2", MAC:"5e:99:d1:e4:a6:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:44.789960 containerd[1977]: 2025-09-13 00:06:44.780 [INFO][5061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0" Namespace="calico-system" Pod="csi-node-driver-hn4jz" WorkloadEndpoint="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:44.828112 containerd[1977]: time="2025-09-13T00:06:44.827048962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:44.828112 containerd[1977]: time="2025-09-13T00:06:44.827128420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:44.828112 containerd[1977]: time="2025-09-13T00:06:44.827152907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.828112 containerd[1977]: time="2025-09-13T00:06:44.827289612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:44.851486 containerd[1977]: time="2025-09-13T00:06:44.851373319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866695b4f-qwdmz,Uid:72487ff0-2060-4952-bd7b-8353e06ebaa1,Namespace:calico-system,Attempt:0,} returns sandbox id \"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78\"" Sep 13 00:06:44.857723 containerd[1977]: time="2025-09-13T00:06:44.855641404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:06:44.856675 systemd[1]: Started cri-containerd-1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0.scope - libcontainer container 1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0. Sep 13 00:06:44.916568 containerd[1977]: time="2025-09-13T00:06:44.916524301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hn4jz,Uid:85496fec-fe0d-4f47-8c0f-4732dc0de194,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0\"" Sep 13 00:06:45.104259 systemd[1]: run-netns-cni\x2d8aa8eecb\x2d20f1\x2d7aed\x2d8b69\x2d9c13130b5bfb.mount: Deactivated successfully. Sep 13 00:06:45.217766 systemd-networkd[1903]: cali4533068f53e: Gained IPv6LL Sep 13 00:06:45.345689 kubelet[3167]: I0913 00:06:45.345649 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82335de8-dedd-4596-af21-5f90659e12f3" path="/var/lib/kubelet/pods/82335de8-dedd-4596-af21-5f90659e12f3/volumes" Sep 13 00:06:46.026887 systemd[1]: Started sshd@7-172.31.17.100:22-139.178.89.65:58250.service - OpenSSH per-connection server daemon (139.178.89.65:58250). Sep 13 00:06:46.117489 systemd-networkd[1903]: calib7518559638: Gained IPv6LL Sep 13 00:06:46.243883 sshd[5191]: Accepted publickey for core from 139.178.89.65 port 58250 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:46.246980 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:46.255269 systemd-logind[1964]: New session 8 of user core. Sep 13 00:06:46.263676 systemd[1]: Started session-8.scope - Session 8 of User core. 
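[annotation] The `run-netns-cni\x2d8aa8eecb…` mount unit above is systemd's escaping of the bind mount at `/run/netns/cni-8aa8eecb-…`: in unit names, `/` becomes `-` and a literal `-` is escaped as `\x2d`. A minimal decoder sketch in Go (the helper name is hypothetical, and it only handles the escapes seen in these logs):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath reverses the common systemd unit-name escaping seen in
// mount units like "run-netns-cni\x2d8aa8eecb\x2d….mount": "-" separates
// path components and "\xNN" encodes a literal byte (here 0x2d, "-").
func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d8aa8eecb\x2d20f1\x2d7aed\x2d8b69\x2d9c13130b5bfb.mount`
	// Prints /run/netns/cni-8aa8eecb-20f1-7aed-8b69-9c13130b5bfb
	fmt.Println(unescapeUnitPath(unit))
}
```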
Sep 13 00:06:46.344983 containerd[1977]: time="2025-09-13T00:06:46.344848317Z" level=info msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" Sep 13 00:06:46.350968 containerd[1977]: time="2025-09-13T00:06:46.350907450Z" level=info msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" Sep 13 00:06:46.351539 containerd[1977]: time="2025-09-13T00:06:46.351141887Z" level=info msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" Sep 13 00:06:46.354692 containerd[1977]: time="2025-09-13T00:06:46.354651322Z" level=info msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" Sep 13 00:06:46.356068 containerd[1977]: time="2025-09-13T00:06:46.355844366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:46.360213 containerd[1977]: time="2025-09-13T00:06:46.360153640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:06:46.367513 containerd[1977]: time="2025-09-13T00:06:46.367307502Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:46.406832 containerd[1977]: time="2025-09-13T00:06:46.406524041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:46.408160 containerd[1977]: time="2025-09-13T00:06:46.407654829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.551976878s" Sep 13 00:06:46.408160 containerd[1977]: time="2025-09-13T00:06:46.407694448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:06:46.413285 containerd[1977]: time="2025-09-13T00:06:46.412683358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:06:46.419424 containerd[1977]: time="2025-09-13T00:06:46.418761877Z" level=info msg="CreateContainer within sandbox \"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:06:46.454383 containerd[1977]: time="2025-09-13T00:06:46.454339677Z" level=info msg="CreateContainer within sandbox \"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f\"" Sep 13 00:06:46.456266 containerd[1977]: time="2025-09-13T00:06:46.455611455Z" level=info msg="StartContainer for \"b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f\"" Sep 13 00:06:46.562942 systemd-networkd[1903]: cali171ab8f48e2: Gained IPv6LL Sep 13 00:06:46.636609 systemd[1]: Started cri-containerd-b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f.scope - libcontainer container 
b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f. Sep 13 00:06:46.667075 systemd[1]: run-containerd-runc-k8s.io-b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f-runc.UhQ1x4.mount: Deactivated successfully. Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.625 [INFO][5240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.625 [INFO][5240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" iface="eth0" netns="/var/run/netns/cni-dd2b071e-c6c7-41c4-02e2-1c19f9946881" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.627 [INFO][5240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" iface="eth0" netns="/var/run/netns/cni-dd2b071e-c6c7-41c4-02e2-1c19f9946881" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.627 [INFO][5240] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" iface="eth0" netns="/var/run/netns/cni-dd2b071e-c6c7-41c4-02e2-1c19f9946881" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.627 [INFO][5240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.627 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.791 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.792 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.792 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.812 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.812 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.825 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:46.856391 containerd[1977]: 2025-09-13 00:06:46.848 [INFO][5240] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:46.861847 containerd[1977]: time="2025-09-13T00:06:46.859557123Z" level=info msg="TearDown network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" successfully" Sep 13 00:06:46.861847 containerd[1977]: time="2025-09-13T00:06:46.859601062Z" level=info msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" returns successfully" Sep 13 00:06:46.867385 systemd[1]: run-netns-cni\x2ddd2b071e\x2dc6c7\x2d41c4\x2d02e2\x2d1c19f9946881.mount: Deactivated successfully. Sep 13 00:06:46.872965 containerd[1977]: time="2025-09-13T00:06:46.869030672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-z9zb5,Uid:3ecbb697-c2d6-45f9-b896-7067a982618c,Namespace:calico-system,Attempt:1,}" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" iface="eth0" netns="/var/run/netns/cni-fed2d29d-1646-516f-b371-deea6682eb8f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" iface="eth0" netns="/var/run/netns/cni-fed2d29d-1646-516f-b371-deea6682eb8f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" iface="eth0" netns="/var/run/netns/cni-fed2d29d-1646-516f-b371-deea6682eb8f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.687 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.856 [INFO][5300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.873 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.873 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.910 [WARNING][5300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.911 [INFO][5300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.924 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:46.972366 containerd[1977]: 2025-09-13 00:06:46.953 [INFO][5241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:06:46.972366 containerd[1977]: time="2025-09-13T00:06:46.972306192Z" level=info msg="TearDown network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" successfully" Sep 13 00:06:46.977643 containerd[1977]: time="2025-09-13T00:06:46.972379471Z" level=info msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" returns successfully" Sep 13 00:06:46.977643 containerd[1977]: time="2025-09-13T00:06:46.976805577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lbdr4,Uid:ac2a539b-0e55-4a98-8350-b301bacb6e1b,Namespace:kube-system,Attempt:1,}" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.677 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.677 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" iface="eth0" netns="/var/run/netns/cni-b3b95ed0-9fa7-9929-e91d-f380a5c21dab" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.680 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" iface="eth0" netns="/var/run/netns/cni-b3b95ed0-9fa7-9929-e91d-f380a5c21dab" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.682 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" iface="eth0" netns="/var/run/netns/cni-b3b95ed0-9fa7-9929-e91d-f380a5c21dab" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.682 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.682 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.894 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.894 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.940 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.979 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.979 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:46.994 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:47.016748 containerd[1977]: 2025-09-13 00:06:47.006 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:47.020019 containerd[1977]: time="2025-09-13T00:06:47.017392058Z" level=info msg="TearDown network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" successfully" Sep 13 00:06:47.020019 containerd[1977]: time="2025-09-13T00:06:47.017591730Z" level=info msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" returns successfully" Sep 13 00:06:47.023925 containerd[1977]: time="2025-09-13T00:06:47.023865993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-nl9sx,Uid:cf5b23e2-56dc-49fe-9c10-0589cb6cd48a,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.754 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.754 [INFO][5242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" iface="eth0" netns="/var/run/netns/cni-91d11076-6cbe-93e3-f0f1-f407225233a4" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.760 [INFO][5242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" iface="eth0" netns="/var/run/netns/cni-91d11076-6cbe-93e3-f0f1-f407225233a4" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.763 [INFO][5242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" iface="eth0" netns="/var/run/netns/cni-91d11076-6cbe-93e3-f0f1-f407225233a4" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.766 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.766 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.971 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.984 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:46.992 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:47.046 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:47.046 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:47.056 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:47.069301 containerd[1977]: 2025-09-13 00:06:47.062 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:06:47.073976 containerd[1977]: time="2025-09-13T00:06:47.072977102Z" level=info msg="TearDown network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" successfully" Sep 13 00:06:47.073976 containerd[1977]: time="2025-09-13T00:06:47.073055597Z" level=info msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" returns successfully" Sep 13 00:06:47.075183 containerd[1977]: time="2025-09-13T00:06:47.075061194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58df989bd8-4mk2j,Uid:c459cf7c-3ec3-4406-97f9-a4a7ecf9e253,Namespace:calico-system,Attempt:1,}" Sep 13 00:06:47.342352 containerd[1977]: time="2025-09-13T00:06:47.341535702Z" level=info msg="StartContainer for \"b3b7f6c3b1c014d6c6cdd3ac8e5c715523d1423fa878f65b06d37655d2f4783f\" returns successfully" Sep 13 00:06:47.359435 containerd[1977]: time="2025-09-13T00:06:47.358695260Z" level=info msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" Sep 13 00:06:47.462052 systemd[1]: run-netns-cni\x2dfed2d29d\x2d1646\x2d516f\x2db371\x2ddeea6682eb8f.mount: Deactivated successfully. Sep 13 00:06:47.464766 systemd[1]: run-netns-cni\x2db3b95ed0\x2d9fa7\x2d9929\x2de91d\x2df380a5c21dab.mount: Deactivated successfully. Sep 13 00:06:47.464852 systemd[1]: run-netns-cni\x2d91d11076\x2d6cbe\x2d93e3\x2df0f1\x2df407225233a4.mount: Deactivated successfully. Sep 13 00:06:47.537523 sshd[5191]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:47.555940 systemd[1]: sshd@7-172.31.17.100:22-139.178.89.65:58250.service: Deactivated successfully. Sep 13 00:06:47.561837 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:06:47.566350 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:06:47.574589 systemd-logind[1964]: Removed session 8. 
Sep 13 00:06:47.678885 systemd-networkd[1903]: cali8c6e05d2ed2: Link UP Sep 13 00:06:47.679825 systemd-networkd[1903]: cali8c6e05d2ed2: Gained carrier Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.092 [INFO][5329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0 goldmane-7988f88666- calico-system 3ecbb697-c2d6-45f9-b896-7067a982618c 963 0 2025-09-13 00:06:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-100 goldmane-7988f88666-z9zb5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8c6e05d2ed2 [] [] }} ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.095 [INFO][5329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.341 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" HandleID="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.341 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" HandleID="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-100", "pod":"goldmane-7988f88666-z9zb5", "timestamp":"2025-09-13 00:06:47.341565511 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.341 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.342 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.342 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.430 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.520 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.577 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.592 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.601 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.601 [INFO][5358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.605 [INFO][5358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436 Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.622 [INFO][5358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.645 [INFO][5358] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.196/26] block=192.168.68.192/26 handle="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.646 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.196/26] handle="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" host="ip-172-31-17-100" Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.646 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
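[annotation] The goldmane allocation above walks Calico's block-affinity path step by step: look up this host's affinities, try the affine block 192.168.68.192/26, load it, assign one address from it, create a handle, then write the block back to claim the IP. A simplified sketch of the assignment step under stated assumptions (in-memory block, no compare-and-swap retry, none of the real block's handle bookkeeping):

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM block: a /26 with a
// used-address set keyed by handle.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> handle
}

// assign scans the affine block for the first free address and records
// the handle, mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false // block full: the caller would try another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.68.192/26"), used: map[netip.Addr]string{}}
	// Pretend .192-.195 are already taken, as in the log (.195 went to csi-node-driver).
	for _, s := range []string{"192.168.68.192", "192.168.68.193", "192.168.68.194", "192.168.68.195"} {
		b.used[netip.MustParseAddr(s)] = "existing"
	}
	ip, _ := b.assign("k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436")
	fmt.Println("claimed", ip) // claimed 192.168.68.196, matching the trace
}
```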
Sep 13 00:06:47.767806 containerd[1977]: 2025-09-13 00:06:47.647 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.196/26] IPv6=[] ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" HandleID="k8s-pod-network.6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.664 [INFO][5329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3ecbb697-c2d6-45f9-b896-7067a982618c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"goldmane-7988f88666-z9zb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c6e05d2ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.664 [INFO][5329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.196/32] ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.665 [INFO][5329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c6e05d2ed2 ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.683 [INFO][5329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.685 [INFO][5329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" 
WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3ecbb697-c2d6-45f9-b896-7067a982618c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436", Pod:"goldmane-7988f88666-z9zb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c6e05d2ed2", MAC:"5a:22:ad:5d:65:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:47.770877 containerd[1977]: 2025-09-13 00:06:47.755 [INFO][5329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436" Namespace="calico-system" Pod="goldmane-7988f88666-z9zb5" WorkloadEndpoint="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:47.857215 containerd[1977]: time="2025-09-13T00:06:47.855809356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:47.859708 containerd[1977]: time="2025-09-13T00:06:47.859594616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:47.860807 containerd[1977]: time="2025-09-13T00:06:47.860122101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:47.861599 containerd[1977]: time="2025-09-13T00:06:47.861474412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:47.903380 systemd-networkd[1903]: cali6260e7fe204: Link UP Sep 13 00:06:47.903651 systemd-networkd[1903]: cali6260e7fe204: Gained carrier Sep 13 00:06:47.931840 systemd[1]: run-containerd-runc-k8s.io-6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436-runc.WamnjM.mount: Deactivated successfully. Sep 13 00:06:47.964166 systemd[1]: Started cri-containerd-6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436.scope - libcontainer container 6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436. 
Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.346 [INFO][5340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0 coredns-7c65d6cfc9- kube-system ac2a539b-0e55-4a98-8350-b301bacb6e1b 965 0 2025-09-13 00:06:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-100 coredns-7c65d6cfc9-lbdr4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6260e7fe204 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.351 [INFO][5340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.581 [INFO][5398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" HandleID="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.582 [INFO][5398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" HandleID="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-100", "pod":"coredns-7c65d6cfc9-lbdr4", "timestamp":"2025-09-13 00:06:47.581079676 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.582 [INFO][5398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.646 [INFO][5398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.647 [INFO][5398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.671 [INFO][5398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.743 [INFO][5398] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.765 [INFO][5398] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.778 [INFO][5398] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.790 [INFO][5398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.791 [INFO][5398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.807 [INFO][5398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14 Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.830 [INFO][5398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.860 [INFO][5398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.197/26] block=192.168.68.192/26 handle="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.860 [INFO][5398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.197/26] handle="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" host="ip-172-31-17-100" Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.862 [INFO][5398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
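[annotation] Each allocation above opens with an `ipam.AutoAssignArgs` request: one IPv4, zero IPv6, a handle ID, and attributes naming the namespace, node, and pod. The sketch below mirrors only the fields visible in the dumped literal, as a local type rather than an import of Calico's actual IPAM package, which has more fields than shown here:

```go
package main

import "fmt"

// autoAssignArgs is a local mirror of the fields dumped in the
// "Auto assigning IP" log lines above; it is illustrative only.
type autoAssignArgs struct {
	Num4, Num6  int
	HandleID    *string
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func main() {
	handle := "k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14"
	req := autoAssignArgs{
		Num4:     1, // exactly one IPv4 and no IPv6, matching "Num4:1, Num6:0"
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "kube-system",
			"node":      "ip-172-31-17-100",
			"pod":       "coredns-7c65d6cfc9-lbdr4",
		},
		Hostname:    "ip-172-31-17-100",
		IntendedUse: "Workload",
	}
	fmt.Printf("request: %+v\n", req)
}
```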
Sep 13 00:06:47.983118 containerd[1977]: 2025-09-13 00:06:47.862 [INFO][5398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.197/26] IPv6=[] ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" HandleID="k8s-pod-network.603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.893 [INFO][5340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ac2a539b-0e55-4a98-8350-b301bacb6e1b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"coredns-7c65d6cfc9-lbdr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6260e7fe204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.895 [INFO][5340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.197/32] ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.895 [INFO][5340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6260e7fe204 ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.902 [INFO][5340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" 
WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.903 [INFO][5340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ac2a539b-0e55-4a98-8350-b301bacb6e1b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14", Pod:"coredns-7c65d6cfc9-lbdr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6260e7fe204", MAC:"06:0b:09:42:b3:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:47.986388 containerd[1977]: 2025-09-13 00:06:47.970 [INFO][5340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lbdr4" WorkloadEndpoint="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:06:48.095452 containerd[1977]: time="2025-09-13T00:06:48.093460838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:48.095452 containerd[1977]: time="2025-09-13T00:06:48.093541118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:48.095452 containerd[1977]: time="2025-09-13T00:06:48.093565041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.095452 containerd[1977]: time="2025-09-13T00:06:48.093696949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.114261 systemd-networkd[1903]: cali687180c5401: Link UP Sep 13 00:06:48.116894 systemd-networkd[1903]: cali687180c5401: Gained carrier Sep 13 00:06:48.183677 systemd[1]: Started cri-containerd-603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14.scope - libcontainer container 603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14. Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.605 [INFO][5355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0 calico-apiserver-56596dc84c- calico-apiserver cf5b23e2-56dc-49fe-9c10-0589cb6cd48a 964 0 2025-09-13 00:06:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56596dc84c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-100 calico-apiserver-56596dc84c-nl9sx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali687180c5401 [] [] }} ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.606 [INFO][5355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.724 [INFO][5421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" HandleID="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.724 [INFO][5421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" HandleID="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-100", "pod":"calico-apiserver-56596dc84c-nl9sx", "timestamp":"2025-09-13 00:06:47.724044086 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.724 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.862 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
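[annotation] The coredns endpoint's ports are logged in hex: `Port:0x35` is 53 (the dns and dns-tcp ports) and `Port:0x23c1` is 9153, coredns's Prometheus metrics port. A one-line check:

```go
package main

import "fmt"

func main() {
	// Hex values copied from the WorkloadEndpointPort dump above.
	fmt.Println(0x35, 0x23c1) // 53 9153
}
```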
Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.863 [INFO][5421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.931 [INFO][5421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.970 [INFO][5421] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.990 [INFO][5421] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.993 [INFO][5421] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.997 [INFO][5421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:47.997 [INFO][5421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.000 [INFO][5421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.028 [INFO][5421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.063 [INFO][5421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.198/26] block=192.168.68.192/26 handle="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.064 [INFO][5421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.198/26] handle="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" host="ip-172-31-17-100" Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.065 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
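[annotation] Every pod on this node lands in the same affine block: 192.168.68.192/26 spans .192 through .255 (2^(32-26) = 64 addresses), and the traces show sequential claims of .195, .196, .197, and now .198. A containment check over the logged addresses:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.68.192/26")
	// 2^(32-26) = 64 addresses: .192 up to .255.
	fmt.Println("block size:", 1<<(32-block.Bits()))

	// The addresses assigned on ip-172-31-17-100 in the traces above.
	for _, s := range []string{"192.168.68.195", "192.168.68.196", "192.168.68.197", "192.168.68.198"} {
		a := netip.MustParseAddr(s)
		fmt.Println(a, "in block:", block.Contains(a)) // all true
	}
}
```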
Sep 13 00:06:48.202207 containerd[1977]: 2025-09-13 00:06:48.065 [INFO][5421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.198/26] IPv6=[] ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" HandleID="k8s-pod-network.3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.080 [INFO][5355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"calico-apiserver-56596dc84c-nl9sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali687180c5401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.085 [INFO][5355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.198/32] ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.087 [INFO][5355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali687180c5401 ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.123 [INFO][5355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.140 [INFO][5355] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a", Pod:"calico-apiserver-56596dc84c-nl9sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali687180c5401", MAC:"3e:a1:df:14:d1:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:48.237786 containerd[1977]: 2025-09-13 00:06:48.197 [INFO][5355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-nl9sx" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:48.279715 systemd-networkd[1903]: calicd4bd986752: Link UP Sep 13 00:06:48.285367 systemd-networkd[1903]: calicd4bd986752: Gained carrier Sep 13 00:06:48.321083 containerd[1977]: time="2025-09-13T00:06:48.319167274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lbdr4,Uid:ac2a539b-0e55-4a98-8350-b301bacb6e1b,Namespace:kube-system,Attempt:1,} returns sandbox id \"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14\"" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.760 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.765 [INFO][5400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" iface="eth0" netns="/var/run/netns/cni-48231a00-1dfd-253d-b8ad-82f60d7ed5c9" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.766 [INFO][5400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" iface="eth0" netns="/var/run/netns/cni-48231a00-1dfd-253d-b8ad-82f60d7ed5c9" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.766 [INFO][5400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" iface="eth0" netns="/var/run/netns/cni-48231a00-1dfd-253d-b8ad-82f60d7ed5c9" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.767 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:47.767 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.051 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.056 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.235 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.286 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.286 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.299 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:48.339606 containerd[1977]: 2025-09-13 00:06:48.322 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:48.340825 containerd[1977]: time="2025-09-13T00:06:48.340772340Z" level=info msg="TearDown network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" successfully" Sep 13 00:06:48.340825 containerd[1977]: time="2025-09-13T00:06:48.340816964Z" level=info msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" returns successfully" Sep 13 00:06:48.343138 containerd[1977]: time="2025-09-13T00:06:48.342731173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:48.343138 containerd[1977]: time="2025-09-13T00:06:48.342816665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:48.343138 containerd[1977]: time="2025-09-13T00:06:48.342849052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.343138 containerd[1977]: time="2025-09-13T00:06:48.342954895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.344823 containerd[1977]: time="2025-09-13T00:06:48.343491224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-7sb8x,Uid:54ae31d6-2edb-4026-9279-0fba6bb34176,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:06:48.363373 containerd[1977]: time="2025-09-13T00:06:48.363230625Z" level=info msg="CreateContainer within sandbox \"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:06:48.456651 systemd[1]: run-netns-cni\x2d48231a00\x2d1dfd\x2d253d\x2db8ad\x2d82f60d7ed5c9.mount: Deactivated successfully. Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:47.608 [INFO][5369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0 calico-kube-controllers-58df989bd8- calico-system c459cf7c-3ec3-4406-97f9-a4a7ecf9e253 966 0 2025-09-13 00:06:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58df989bd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-100 calico-kube-controllers-58df989bd8-4mk2j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicd4bd986752 [] [] }} ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:47.609 [INFO][5369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:47.775 [INFO][5420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" HandleID="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:47.780 [INFO][5420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" HandleID="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000372050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-100", 
"pod":"calico-kube-controllers-58df989bd8-4mk2j", "timestamp":"2025-09-13 00:06:47.77406373 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:47.781 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.068 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.069 [INFO][5420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.092 [INFO][5420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.121 [INFO][5420] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.135 [INFO][5420] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.141 [INFO][5420] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.155 [INFO][5420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.157 [INFO][5420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.172 [INFO][5420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.205 [INFO][5420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.233 [INFO][5420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.199/26] block=192.168.68.192/26 handle="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.234 [INFO][5420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.199/26] handle="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" host="ip-172-31-17-100" Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.235 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:06:48.462172 containerd[1977]: 2025-09-13 00:06:48.236 [INFO][5420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.199/26] IPv6=[] ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" HandleID="k8s-pod-network.172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.463723 containerd[1977]: 2025-09-13 00:06:48.261 [INFO][5369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0", GenerateName:"calico-kube-controllers-58df989bd8-", Namespace:"calico-system", SelfLink:"", UID:"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58df989bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"calico-kube-controllers-58df989bd8-4mk2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicd4bd986752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:48.463723 containerd[1977]: 2025-09-13 00:06:48.261 [INFO][5369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.199/32] ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.463723 containerd[1977]: 2025-09-13 00:06:48.261 [INFO][5369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd4bd986752 ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.463723 containerd[1977]: 2025-09-13 00:06:48.291 [INFO][5369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.463723 containerd[1977]: 
2025-09-13 00:06:48.301 [INFO][5369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0", GenerateName:"calico-kube-controllers-58df989bd8-", Namespace:"calico-system", SelfLink:"", UID:"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58df989bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc", Pod:"calico-kube-controllers-58df989bd8-4mk2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicd4bd986752", MAC:"56:af:37:6a:f2:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:48.463723 containerd[1977]: 2025-09-13 00:06:48.373 [INFO][5369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc" Namespace="calico-system" Pod="calico-kube-controllers-58df989bd8-4mk2j" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:06:48.506634 systemd[1]: run-containerd-runc-k8s.io-3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a-runc.cHobv8.mount: Deactivated successfully. Sep 13 00:06:48.518637 systemd[1]: Started cri-containerd-3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a.scope - libcontainer container 3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a. Sep 13 00:06:48.545346 containerd[1977]: time="2025-09-13T00:06:48.544801685Z" level=info msg="CreateContainer within sandbox \"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f0bfdcd0add3c5ff90db30ebea7eed52843b1f74d6563922a29c9a861acc1b1\"" Sep 13 00:06:48.550164 containerd[1977]: time="2025-09-13T00:06:48.550118577Z" level=info msg="StartContainer for \"7f0bfdcd0add3c5ff90db30ebea7eed52843b1f74d6563922a29c9a861acc1b1\"" Sep 13 00:06:48.751038 containerd[1977]: time="2025-09-13T00:06:48.749549544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:48.751038 containerd[1977]: time="2025-09-13T00:06:48.750861634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:48.751038 containerd[1977]: time="2025-09-13T00:06:48.750883081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.751879 containerd[1977]: time="2025-09-13T00:06:48.751000720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.759440 containerd[1977]: time="2025-09-13T00:06:48.759382398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-z9zb5,Uid:3ecbb697-c2d6-45f9-b896-7067a982618c,Namespace:calico-system,Attempt:1,} returns sandbox id \"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436\"" Sep 13 00:06:48.783605 containerd[1977]: time="2025-09-13T00:06:48.783470869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-nl9sx,Uid:cf5b23e2-56dc-49fe-9c10-0589cb6cd48a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a\"" Sep 13 00:06:48.815247 systemd[1]: Started cri-containerd-172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc.scope - libcontainer container 172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc. Sep 13 00:06:48.844684 systemd[1]: Started cri-containerd-7f0bfdcd0add3c5ff90db30ebea7eed52843b1f74d6563922a29c9a861acc1b1.scope - libcontainer container 7f0bfdcd0add3c5ff90db30ebea7eed52843b1f74d6563922a29c9a861acc1b1. 
Sep 13 00:06:48.942952 containerd[1977]: time="2025-09-13T00:06:48.941786182Z" level=info msg="StartContainer for \"7f0bfdcd0add3c5ff90db30ebea7eed52843b1f74d6563922a29c9a861acc1b1\" returns successfully" Sep 13 00:06:49.183608 systemd-networkd[1903]: cali5e9cd4161ee: Link UP Sep 13 00:06:49.186520 systemd-networkd[1903]: cali5e9cd4161ee: Gained carrier Sep 13 00:06:49.226448 kubelet[3167]: I0913 00:06:49.225211 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lbdr4" podStartSLOduration=46.22518313 podStartE2EDuration="46.22518313s" podCreationTimestamp="2025-09-13 00:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:49.110528082 +0000 UTC m=+51.949498124" watchObservedRunningTime="2025-09-13 00:06:49.22518313 +0000 UTC m=+52.064153172" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:48.933 [INFO][5591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0 calico-apiserver-56596dc84c- calico-apiserver 54ae31d6-2edb-4026-9279-0fba6bb34176 979 0 2025-09-13 00:06:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56596dc84c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-100 calico-apiserver-56596dc84c-7sb8x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e9cd4161ee [] [] }} ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:48.937 [INFO][5591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.012 [INFO][5699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" HandleID="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.012 [INFO][5699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" HandleID="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-100", "pod":"calico-apiserver-56596dc84c-7sb8x", "timestamp":"2025-09-13 00:06:49.012225895 +0000 UTC"}, Hostname:"ip-172-31-17-100", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.012 [INFO][5699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.012 [INFO][5699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.012 [INFO][5699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-100' Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.029 [INFO][5699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.037 [INFO][5699] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.055 [INFO][5699] ipam/ipam.go 511: Trying affinity for 192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.065 [INFO][5699] ipam/ipam.go 158: Attempting to load block cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.073 [INFO][5699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.073 [INFO][5699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.091 [INFO][5699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097 Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.110 [INFO][5699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.149 [INFO][5699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.68.200/26] block=192.168.68.192/26 handle="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.149 [INFO][5699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.68.200/26] handle="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" host="ip-172-31-17-100" Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.149 [INFO][5699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:06:49.236948 containerd[1977]: 2025-09-13 00:06:49.149 [INFO][5699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.200/26] IPv6=[] ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" HandleID="k8s-pod-network.acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.158 [INFO][5591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"54ae31d6-2edb-4026-9279-0fba6bb34176", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"", Pod:"calico-apiserver-56596dc84c-7sb8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e9cd4161ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.164 [INFO][5591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.68.200/32] ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.164 [INFO][5591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e9cd4161ee ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.190 [INFO][5591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.191 [INFO][5591] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"54ae31d6-2edb-4026-9279-0fba6bb34176", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097", Pod:"calico-apiserver-56596dc84c-7sb8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e9cd4161ee", MAC:"da:55:47:6f:6f:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:49.241967 containerd[1977]: 2025-09-13 00:06:49.223 [INFO][5591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097" Namespace="calico-apiserver" Pod="calico-apiserver-56596dc84c-7sb8x" WorkloadEndpoint="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:49.281345 containerd[1977]: time="2025-09-13T00:06:49.281304659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58df989bd8-4mk2j,Uid:c459cf7c-3ec3-4406-97f9-a4a7ecf9e253,Namespace:calico-system,Attempt:1,} returns sandbox id \"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc\"" Sep 13 00:06:49.313565 systemd-networkd[1903]: cali8c6e05d2ed2: Gained IPv6LL Sep 13 00:06:49.353227 containerd[1977]: time="2025-09-13T00:06:49.341372545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:49.353227 containerd[1977]: time="2025-09-13T00:06:49.352227466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:49.353227 containerd[1977]: time="2025-09-13T00:06:49.352249604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:49.358436 containerd[1977]: time="2025-09-13T00:06:49.352402221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:49.410622 systemd[1]: Started cri-containerd-acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097.scope - libcontainer container acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097. Sep 13 00:06:49.442503 containerd[1977]: time="2025-09-13T00:06:49.439659795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.446274 containerd[1977]: time="2025-09-13T00:06:49.446100995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:06:49.451567 containerd[1977]: time="2025-09-13T00:06:49.450454056Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.457044 containerd[1977]: time="2025-09-13T00:06:49.456402239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:49.457693 containerd[1977]: time="2025-09-13T00:06:49.456990134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.044266423s" Sep 13 00:06:49.458247 containerd[1977]: time="2025-09-13T00:06:49.458224309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:06:49.460562 containerd[1977]: time="2025-09-13T00:06:49.460532111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:06:49.462737 containerd[1977]: time="2025-09-13T00:06:49.462589022Z" level=info msg="CreateContainer within sandbox \"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:06:49.483629 containerd[1977]: time="2025-09-13T00:06:49.483522861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56596dc84c-7sb8x,Uid:54ae31d6-2edb-4026-9279-0fba6bb34176,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097\"" Sep 13 00:06:49.502703 containerd[1977]: time="2025-09-13T00:06:49.502660082Z" level=info msg="CreateContainer within sandbox \"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"140e8a93cce318215de4f4535fcc3e8f20329647ac38378eba0a18babca31541\"" Sep 13 00:06:49.503434 containerd[1977]: time="2025-09-13T00:06:49.503369955Z" level=info msg="StartContainer for \"140e8a93cce318215de4f4535fcc3e8f20329647ac38378eba0a18babca31541\"" Sep 13 00:06:49.546757 systemd[1]: Started cri-containerd-140e8a93cce318215de4f4535fcc3e8f20329647ac38378eba0a18babca31541.scope - libcontainer container 140e8a93cce318215de4f4535fcc3e8f20329647ac38378eba0a18babca31541. 
Sep 13 00:06:49.586449 containerd[1977]: time="2025-09-13T00:06:49.586262391Z" level=info msg="StartContainer for \"140e8a93cce318215de4f4535fcc3e8f20329647ac38378eba0a18babca31541\" returns successfully" Sep 13 00:06:49.825606 systemd-networkd[1903]: cali6260e7fe204: Gained IPv6LL Sep 13 00:06:50.083054 systemd-networkd[1903]: cali687180c5401: Gained IPv6LL Sep 13 00:06:50.145689 systemd-networkd[1903]: calicd4bd986752: Gained IPv6LL Sep 13 00:06:50.721598 systemd-networkd[1903]: cali5e9cd4161ee: Gained IPv6LL Sep 13 00:06:51.914225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737341441.mount: Deactivated successfully. Sep 13 00:06:51.951399 containerd[1977]: time="2025-09-13T00:06:51.950913891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.952839 containerd[1977]: time="2025-09-13T00:06:51.952777454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:06:51.954948 containerd[1977]: time="2025-09-13T00:06:51.954914453Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.958611 containerd[1977]: time="2025-09-13T00:06:51.958304458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.959184 containerd[1977]: time="2025-09-13T00:06:51.959109541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.498331315s" Sep 13 00:06:51.959184 containerd[1977]: time="2025-09-13T00:06:51.959145132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:06:51.961179 containerd[1977]: time="2025-09-13T00:06:51.961152768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:06:51.962456 containerd[1977]: time="2025-09-13T00:06:51.962055249Z" level=info msg="CreateContainer within sandbox \"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:06:51.987041 containerd[1977]: time="2025-09-13T00:06:51.986997225Z" level=info msg="CreateContainer within sandbox \"256629b8a1665908b81eab60a010bf71f80859dbe66f81debcbb729d3c9f5c78\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6a32f245329fe978d2b5c7047bd8ae9d5a700267db9e5d68e7aeba71a039340d\"" Sep 13 00:06:51.987828 containerd[1977]: time="2025-09-13T00:06:51.987798731Z" level=info msg="StartContainer for \"6a32f245329fe978d2b5c7047bd8ae9d5a700267db9e5d68e7aeba71a039340d\"" Sep 13 00:06:52.041826 systemd[1]: Started cri-containerd-6a32f245329fe978d2b5c7047bd8ae9d5a700267db9e5d68e7aeba71a039340d.scope - libcontainer container 6a32f245329fe978d2b5c7047bd8ae9d5a700267db9e5d68e7aeba71a039340d. 
Sep 13 00:06:52.132865 containerd[1977]: time="2025-09-13T00:06:52.132807226Z" level=info msg="StartContainer for \"6a32f245329fe978d2b5c7047bd8ae9d5a700267db9e5d68e7aeba71a039340d\" returns successfully" Sep 13 00:06:52.161802 kubelet[3167]: I0913 00:06:52.161727 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-866695b4f-qwdmz" podStartSLOduration=2.055605223 podStartE2EDuration="9.161709592s" podCreationTimestamp="2025-09-13 00:06:43 +0000 UTC" firstStartedPulling="2025-09-13 00:06:44.854370282 +0000 UTC m=+47.693340308" lastFinishedPulling="2025-09-13 00:06:51.960474644 +0000 UTC m=+54.799444677" observedRunningTime="2025-09-13 00:06:52.159638505 +0000 UTC m=+54.998608547" watchObservedRunningTime="2025-09-13 00:06:52.161709592 +0000 UTC m=+55.000679627" Sep 13 00:06:52.569700 systemd[1]: Started sshd@8-172.31.17.100:22-139.178.89.65:33242.service - OpenSSH per-connection server daemon (139.178.89.65:33242). Sep 13 00:06:52.786364 sshd[5858]: Accepted publickey for core from 139.178.89.65 port 33242 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:52.790398 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:52.797962 systemd-logind[1964]: New session 9 of user core. Sep 13 00:06:52.801635 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:06:53.026384 ntpd[1955]: Listen normally on 7 vxlan.calico 192.168.68.192:123 Sep 13 00:06:53.027377 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 7 vxlan.calico 192.168.68.192:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 8 vxlan.calico [fe80::64cd:72ff:fe4a:9233%4]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 9 cali4533068f53e [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 10 calib7518559638 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 11 cali171ab8f48e2 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 12 cali8c6e05d2ed2 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 13 cali6260e7fe204 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 14 cali687180c5401 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 15 calicd4bd986752 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:06:53.028687 ntpd[1955]: 13 Sep 00:06:53 ntpd[1955]: Listen normally on 16 cali5e9cd4161ee [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:06:53.027401 ntpd[1955]: Listen normally on 8 vxlan.calico [fe80::64cd:72ff:fe4a:9233%4]:123 Sep 13 00:06:53.027465 ntpd[1955]: Listen normally on 9 cali4533068f53e [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:06:53.027495 ntpd[1955]: Listen normally on 10 calib7518559638 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:06:53.027526 ntpd[1955]: Listen normally on 11 cali171ab8f48e2 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:06:53.027556 ntpd[1955]: Listen normally on 12 cali8c6e05d2ed2 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:06:53.027615 ntpd[1955]: Listen normally on 13 cali6260e7fe204 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:06:53.027645 ntpd[1955]: Listen normally on 14 cali687180c5401 
[fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:06:53.027672 ntpd[1955]: Listen normally on 15 calicd4bd986752 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:06:53.027714 ntpd[1955]: Listen normally on 16 cali5e9cd4161ee [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:06:53.565568 sshd[5858]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:53.569964 systemd[1]: sshd@8-172.31.17.100:22-139.178.89.65:33242.service: Deactivated successfully. Sep 13 00:06:53.573946 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:06:53.575083 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:06:53.576273 systemd-logind[1964]: Removed session 9. Sep 13 00:06:56.817839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663343929.mount: Deactivated successfully. Sep 13 00:06:57.611419 containerd[1977]: time="2025-09-13T00:06:57.611370341Z" level=info msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" Sep 13 00:06:57.881267 containerd[1977]: time="2025-09-13T00:06:57.881027859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.883517 containerd[1977]: time="2025-09-13T00:06:57.883427677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:06:57.887022 containerd[1977]: time="2025-09-13T00:06:57.886770057Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.891063 containerd[1977]: time="2025-09-13T00:06:57.891025103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:57.891657 containerd[1977]: time="2025-09-13T00:06:57.891624414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.93044011s" Sep 13 00:06:57.891736 containerd[1977]: time="2025-09-13T00:06:57.891661322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:06:57.895784 containerd[1977]: time="2025-09-13T00:06:57.895740481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:06:57.914738 containerd[1977]: time="2025-09-13T00:06:57.914357364Z" level=info msg="CreateContainer within sandbox \"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:06:57.948663 containerd[1977]: time="2025-09-13T00:06:57.947996848Z" level=info msg="CreateContainer within sandbox \"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51\"" Sep 13 00:06:57.952461 containerd[1977]: time="2025-09-13T00:06:57.952367968Z" level=info msg="StartContainer for 
\"811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51\"" Sep 13 00:06:58.060098 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.6NBinc.mount: Deactivated successfully. Sep 13 00:06:58.067045 systemd[1]: Started cri-containerd-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51.scope - libcontainer container 811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51. Sep 13 00:06:58.147167 containerd[1977]: time="2025-09-13T00:06:58.145103726Z" level=info msg="StartContainer for \"811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51\" returns successfully" Sep 13 00:06:58.480771 kubelet[3167]: I0913 00:06:58.480691 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-z9zb5" podStartSLOduration=30.332349066 podStartE2EDuration="39.459095657s" podCreationTimestamp="2025-09-13 00:06:19 +0000 UTC" firstStartedPulling="2025-09-13 00:06:48.766074239 +0000 UTC m=+51.605044261" lastFinishedPulling="2025-09-13 00:06:57.892820827 +0000 UTC m=+60.731790852" observedRunningTime="2025-09-13 00:06:58.321476219 +0000 UTC m=+61.160446251" watchObservedRunningTime="2025-09-13 00:06:58.459095657 +0000 UTC m=+61.298065698" Sep 13 00:06:58.619640 systemd[1]: Started sshd@9-172.31.17.100:22-139.178.89.65:33250.service - OpenSSH per-connection server daemon (139.178.89.65:33250). Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.097 [WARNING][5898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85496fec-fe0d-4f47-8c0f-4732dc0de194", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0", Pod:"csi-node-driver-hn4jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali171ab8f48e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.101 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.101 [INFO][5898] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" iface="eth0" netns="" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.101 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.101 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.594 [INFO][5933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.599 [INFO][5933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.601 [INFO][5933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.638 [WARNING][5933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.638 [INFO][5933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.642 [INFO][5933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:58.648494 containerd[1977]: 2025-09-13 00:06:58.645 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.649202 containerd[1977]: time="2025-09-13T00:06:58.648532098Z" level=info msg="TearDown network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" successfully" Sep 13 00:06:58.649202 containerd[1977]: time="2025-09-13T00:06:58.648556207Z" level=info msg="StopPodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" returns successfully" Sep 13 00:06:58.739255 containerd[1977]: time="2025-09-13T00:06:58.739153360Z" level=info msg="RemovePodSandbox for \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" Sep 13 00:06:58.743026 containerd[1977]: time="2025-09-13T00:06:58.742974934Z" level=info msg="Forcibly stopping sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\"" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.810 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85496fec-fe0d-4f47-8c0f-4732dc0de194", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0", Pod:"csi-node-driver-hn4jz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali171ab8f48e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.810 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.810 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" iface="eth0" netns="" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.811 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.811 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.841 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.841 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.841 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.847 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.847 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" HandleID="k8s-pod-network.de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Workload="ip--172--31--17--100-k8s-csi--node--driver--hn4jz-eth0" Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.849 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:58.853998 containerd[1977]: 2025-09-13 00:06:58.851 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b" Sep 13 00:06:58.854845 containerd[1977]: time="2025-09-13T00:06:58.854064647Z" level=info msg="TearDown network for sandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" successfully" Sep 13 00:06:58.870234 sshd[5976]: Accepted publickey for core from 139.178.89.65 port 33250 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:06:58.874291 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:58.875714 containerd[1977]: time="2025-09-13T00:06:58.874839302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:06:58.881918 systemd-logind[1964]: New session 10 of user core. Sep 13 00:06:58.887283 containerd[1977]: time="2025-09-13T00:06:58.887226712Z" level=info msg="RemovePodSandbox \"de0285cc48c29f3a5aeaea2f7418e31ff406f33a07fe369109d61740ddadaa0b\" returns successfully" Sep 13 00:06:58.889058 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:06:58.903666 containerd[1977]: time="2025-09-13T00:06:58.903629784Z" level=info msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.943 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"54ae31d6-2edb-4026-9279-0fba6bb34176", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097", Pod:"calico-apiserver-56596dc84c-7sb8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e9cd4161ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.943 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.943 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" iface="eth0" netns="" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.943 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.943 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.982 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.985 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:58.985 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:59.003 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:59.003 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:59.010 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.019485 containerd[1977]: 2025-09-13 00:06:59.013 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.023593 containerd[1977]: time="2025-09-13T00:06:59.019535896Z" level=info msg="TearDown network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" successfully" Sep 13 00:06:59.023593 containerd[1977]: time="2025-09-13T00:06:59.020968426Z" level=info msg="StopPodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" returns successfully" Sep 13 00:06:59.023593 containerd[1977]: time="2025-09-13T00:06:59.023045405Z" level=info msg="RemovePodSandbox for \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" Sep 13 00:06:59.023593 containerd[1977]: time="2025-09-13T00:06:59.023085351Z" level=info msg="Forcibly stopping sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\"" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.112 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"54ae31d6-2edb-4026-9279-0fba6bb34176", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097", Pod:"calico-apiserver-56596dc84c-7sb8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e9cd4161ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.112 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.112 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" iface="eth0" netns="" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.112 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.112 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.160 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.162 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.162 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.173 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.173 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" HandleID="k8s-pod-network.ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--7sb8x-eth0" Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.176 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.183043 containerd[1977]: 2025-09-13 00:06:59.179 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b" Sep 13 00:06:59.184634 containerd[1977]: time="2025-09-13T00:06:59.183099867Z" level=info msg="TearDown network for sandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" successfully" Sep 13 00:06:59.189134 containerd[1977]: time="2025-09-13T00:06:59.189073892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:06:59.189369 containerd[1977]: time="2025-09-13T00:06:59.189168824Z" level=info msg="RemovePodSandbox \"ff1ddff18a0da1167c2d095701cd3f20a35338bfff712ec1441f993e3aa2bd8b\" returns successfully" Sep 13 00:06:59.190347 containerd[1977]: time="2025-09-13T00:06:59.190306779Z" level=info msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.249 [WARNING][6061] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a", Pod:"calico-apiserver-56596dc84c-nl9sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali687180c5401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.250 [INFO][6061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.250 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" iface="eth0" netns="" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.250 [INFO][6061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.250 [INFO][6061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.286 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.287 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.287 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.303 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.303 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.307 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.333293 containerd[1977]: 2025-09-13 00:06:59.327 [INFO][6061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.333293 containerd[1977]: time="2025-09-13T00:06:59.333104882Z" level=info msg="TearDown network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" successfully" Sep 13 00:06:59.333293 containerd[1977]: time="2025-09-13T00:06:59.333128309Z" level=info msg="StopPodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" returns successfully" Sep 13 00:06:59.337270 containerd[1977]: time="2025-09-13T00:06:59.334340163Z" level=info msg="RemovePodSandbox for \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" Sep 13 00:06:59.337270 containerd[1977]: time="2025-09-13T00:06:59.334373047Z" level=info msg="Forcibly stopping sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\"" Sep 13 00:06:59.485713 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.1f6NU5.mount: Deactivated successfully. Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.423 [WARNING][6085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0", GenerateName:"calico-apiserver-56596dc84c-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf5b23e2-56dc-49fe-9c10-0589cb6cd48a", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56596dc84c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a", Pod:"calico-apiserver-56596dc84c-nl9sx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.68.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali687180c5401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.425 [INFO][6085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.426 [INFO][6085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" iface="eth0" netns="" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.427 [INFO][6085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.427 [INFO][6085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.517 [INFO][6101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.518 [INFO][6101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.518 [INFO][6101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.541 [WARNING][6101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.541 [INFO][6101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" HandleID="k8s-pod-network.f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Workload="ip--172--31--17--100-k8s-calico--apiserver--56596dc84c--nl9sx-eth0" Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.543 [INFO][6101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.552871 containerd[1977]: 2025-09-13 00:06:59.548 [INFO][6085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222" Sep 13 00:06:59.554502 containerd[1977]: time="2025-09-13T00:06:59.553832067Z" level=info msg="TearDown network for sandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" successfully" Sep 13 00:06:59.568726 containerd[1977]: time="2025-09-13T00:06:59.568507080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:06:59.568726 containerd[1977]: time="2025-09-13T00:06:59.568588185Z" level=info msg="RemovePodSandbox \"f326b7b2e0abad06b09424ca944457504e26e940c441a69b8dde7a182165f222\" returns successfully" Sep 13 00:06:59.571923 containerd[1977]: time="2025-09-13T00:06:59.571875686Z" level=info msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.684 [WARNING][6128] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3ecbb697-c2d6-45f9-b896-7067a982618c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436", Pod:"goldmane-7988f88666-z9zb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c6e05d2ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.684 [INFO][6128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.684 [INFO][6128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" iface="eth0" netns="" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.684 [INFO][6128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.684 [INFO][6128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.722 [INFO][6136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.722 [INFO][6136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.722 [INFO][6136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.732 [WARNING][6136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.732 [INFO][6136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.734 [INFO][6136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.740094 containerd[1977]: 2025-09-13 00:06:59.736 [INFO][6128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.742098 containerd[1977]: time="2025-09-13T00:06:59.740165252Z" level=info msg="TearDown network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" successfully" Sep 13 00:06:59.742098 containerd[1977]: time="2025-09-13T00:06:59.740212757Z" level=info msg="StopPodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" returns successfully" Sep 13 00:06:59.742843 containerd[1977]: time="2025-09-13T00:06:59.742279332Z" level=info msg="RemovePodSandbox for \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" Sep 13 00:06:59.742843 containerd[1977]: time="2025-09-13T00:06:59.742327816Z" level=info msg="Forcibly stopping sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\"" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.803 [WARNING][6150] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"3ecbb697-c2d6-45f9-b896-7067a982618c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"6dde568d120392280a396a161edc6cb64d62b1bcc15084827592f45ec5830436", Pod:"goldmane-7988f88666-z9zb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8c6e05d2ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.804 [INFO][6150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.804 [INFO][6150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" iface="eth0" netns="" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.804 [INFO][6150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.804 [INFO][6150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.852 [INFO][6158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.852 [INFO][6158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.852 [INFO][6158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.864 [WARNING][6158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.864 [INFO][6158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" HandleID="k8s-pod-network.8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Workload="ip--172--31--17--100-k8s-goldmane--7988f88666--z9zb5-eth0" Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.866 [INFO][6158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.871740 containerd[1977]: 2025-09-13 00:06:59.869 [INFO][6150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c" Sep 13 00:06:59.873826 containerd[1977]: time="2025-09-13T00:06:59.871815281Z" level=info msg="TearDown network for sandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" successfully" Sep 13 00:06:59.878743 containerd[1977]: time="2025-09-13T00:06:59.878684245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:06:59.878743 containerd[1977]: time="2025-09-13T00:06:59.878749144Z" level=info msg="RemovePodSandbox \"8f63c0f16b117ba3ee0c196e4a4fe6e3e20d3bec946680beb3742e0292296a9c\" returns successfully" Sep 13 00:06:59.879601 containerd[1977]: time="2025-09-13T00:06:59.879239536Z" level=info msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.930 [WARNING][6172] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.931 [INFO][6172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.931 [INFO][6172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" iface="eth0" netns="" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.931 [INFO][6172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.931 [INFO][6172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.970 [INFO][6179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.971 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.971 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.979 [WARNING][6179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.979 [INFO][6179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.982 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:06:59.987926 containerd[1977]: 2025-09-13 00:06:59.984 [INFO][6172] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:06:59.989517 containerd[1977]: time="2025-09-13T00:06:59.988578073Z" level=info msg="TearDown network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" successfully" Sep 13 00:06:59.989517 containerd[1977]: time="2025-09-13T00:06:59.988614345Z" level=info msg="StopPodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" returns successfully" Sep 13 00:06:59.989847 containerd[1977]: time="2025-09-13T00:06:59.989656966Z" level=info msg="RemovePodSandbox for \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" Sep 13 00:06:59.989847 containerd[1977]: time="2025-09-13T00:06:59.989693266Z" level=info msg="Forcibly stopping sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\"" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.055 [WARNING][6193] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" WorkloadEndpoint="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.055 [INFO][6193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.055 [INFO][6193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" iface="eth0" netns="" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.055 [INFO][6193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.055 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.094 [INFO][6200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.095 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.095 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.104 [WARNING][6200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.104 [INFO][6200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" HandleID="k8s-pod-network.e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Workload="ip--172--31--17--100-k8s-whisker--6f5dc47898--hfqln-eth0" Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.110 [INFO][6200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:00.118018 containerd[1977]: 2025-09-13 00:07:00.112 [INFO][6193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de" Sep 13 00:07:00.118018 containerd[1977]: time="2025-09-13T00:07:00.115988133Z" level=info msg="TearDown network for sandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" successfully" Sep 13 00:07:00.132583 containerd[1977]: time="2025-09-13T00:07:00.131387551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:00.132583 containerd[1977]: time="2025-09-13T00:07:00.131523334Z" level=info msg="RemovePodSandbox \"e4422306409328c9e5cfa654439c304ecf0cf67a536e3510b5a0af328b6296de\" returns successfully" Sep 13 00:07:00.132805 containerd[1977]: time="2025-09-13T00:07:00.132744042Z" level=info msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" Sep 13 00:07:00.206023 sshd[5976]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:00.213214 systemd[1]: sshd@9-172.31.17.100:22-139.178.89.65:33250.service: Deactivated successfully. Sep 13 00:07:00.217089 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:07:00.220507 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:07:00.223377 systemd-logind[1964]: Removed session 10. Sep 13 00:07:00.249488 systemd[1]: Started sshd@10-172.31.17.100:22-139.178.89.65:53258.service - OpenSSH per-connection server daemon (139.178.89.65:53258). Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.193 [WARNING][6215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41318bb3-412c-4e63-8cef-3f95f4d6ba6e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83", Pod:"coredns-7c65d6cfc9-6hgsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4533068f53e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.193 [INFO][6215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.193 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" iface="eth0" netns="" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.193 [INFO][6215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.193 [INFO][6215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.235 [INFO][6222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.235 [INFO][6222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.235 [INFO][6222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.246 [WARNING][6222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.246 [INFO][6222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.252 [INFO][6222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:00.261029 containerd[1977]: 2025-09-13 00:07:00.256 [INFO][6215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.261737 containerd[1977]: time="2025-09-13T00:07:00.261092289Z" level=info msg="TearDown network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" successfully" Sep 13 00:07:00.261737 containerd[1977]: time="2025-09-13T00:07:00.261139580Z" level=info msg="StopPodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" returns successfully" Sep 13 00:07:00.275166 containerd[1977]: time="2025-09-13T00:07:00.275122904Z" level=info msg="RemovePodSandbox for \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" Sep 13 00:07:00.275166 containerd[1977]: time="2025-09-13T00:07:00.275167899Z" level=info msg="Forcibly stopping sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\"" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.341 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41318bb3-412c-4e63-8cef-3f95f4d6ba6e", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"8ebae245d87a7859187f9c50083cf882352ea99827c9fdc57053631c211d5d83", Pod:"coredns-7c65d6cfc9-6hgsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4533068f53e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.342 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.342 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" iface="eth0" netns="" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.342 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.342 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.388 [INFO][6250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.388 [INFO][6250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.388 [INFO][6250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.398 [WARNING][6250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.399 [INFO][6250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" HandleID="k8s-pod-network.87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--6hgsg-eth0" Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.400 [INFO][6250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:00.408049 containerd[1977]: 2025-09-13 00:07:00.403 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa" Sep 13 00:07:00.409194 containerd[1977]: time="2025-09-13T00:07:00.408553068Z" level=info msg="TearDown network for sandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" successfully" Sep 13 00:07:00.415574 containerd[1977]: time="2025-09-13T00:07:00.415346330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:00.415574 containerd[1977]: time="2025-09-13T00:07:00.415467872Z" level=info msg="RemovePodSandbox \"87c715d43b5a7f57b362bede85bedd4d4c694aa66a35201286b31e49475193fa\" returns successfully" Sep 13 00:07:00.416091 containerd[1977]: time="2025-09-13T00:07:00.416057194Z" level=info msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" Sep 13 00:07:00.460945 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 53258 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:00.463633 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:00.475501 systemd-logind[1964]: New session 11 of user core. Sep 13 00:07:00.478496 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:07:00.507200 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.vMAQTv.mount: Deactivated successfully. Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.455 [WARNING][6265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ac2a539b-0e55-4a98-8350-b301bacb6e1b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14", Pod:"coredns-7c65d6cfc9-lbdr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6260e7fe204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.455 [INFO][6265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.455 [INFO][6265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" iface="eth0" netns="" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.455 [INFO][6265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.455 [INFO][6265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.497 [INFO][6272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.498 [INFO][6272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.498 [INFO][6272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.512 [WARNING][6272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.512 [INFO][6272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.514 [INFO][6272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:00.520937 containerd[1977]: 2025-09-13 00:07:00.517 [INFO][6265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.520937 containerd[1977]: time="2025-09-13T00:07:00.520767488Z" level=info msg="TearDown network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" successfully" Sep 13 00:07:00.520937 containerd[1977]: time="2025-09-13T00:07:00.520814829Z" level=info msg="StopPodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" returns successfully" Sep 13 00:07:00.522705 containerd[1977]: time="2025-09-13T00:07:00.521875312Z" level=info msg="RemovePodSandbox for \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" Sep 13 00:07:00.522705 containerd[1977]: time="2025-09-13T00:07:00.521901817Z" level=info msg="Forcibly stopping sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\"" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.605 [WARNING][6301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ac2a539b-0e55-4a98-8350-b301bacb6e1b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"603069e753416802c1d2a07e61a8a1fe82a3c8e8ef83ec27f310286c7a11dd14", Pod:"coredns-7c65d6cfc9-lbdr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.68.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6260e7fe204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.605 [INFO][6301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.605 [INFO][6301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" iface="eth0" netns="" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.605 [INFO][6301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.605 [INFO][6301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.652 [INFO][6316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.653 [INFO][6316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.653 [INFO][6316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.664 [WARNING][6316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.664 [INFO][6316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" HandleID="k8s-pod-network.dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Workload="ip--172--31--17--100-k8s-coredns--7c65d6cfc9--lbdr4-eth0" Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.668 [INFO][6316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:00.771023 containerd[1977]: 2025-09-13 00:07:00.676 [INFO][6301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f" Sep 13 00:07:00.772960 containerd[1977]: time="2025-09-13T00:07:00.770987907Z" level=info msg="TearDown network for sandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" successfully" Sep 13 00:07:00.813181 containerd[1977]: time="2025-09-13T00:07:00.813067617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:00.813314 containerd[1977]: time="2025-09-13T00:07:00.813220517Z" level=info msg="RemovePodSandbox \"dae5c75ddd0e8bdf436b3dba9726a3f8c5ca4a47e6ef4d71602cdba4f1cc950f\" returns successfully" Sep 13 00:07:00.815905 containerd[1977]: time="2025-09-13T00:07:00.815816746Z" level=info msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" Sep 13 00:07:01.093762 sshd[6231]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:01.126900 systemd[1]: sshd@10-172.31.17.100:22-139.178.89.65:53258.service: Deactivated successfully. Sep 13 00:07:01.134117 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:07:01.150701 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:07:01.157888 systemd[1]: Started sshd@11-172.31.17.100:22-139.178.89.65:53268.service - OpenSSH per-connection server daemon (139.178.89.65:53268). Sep 13 00:07:01.168565 systemd-logind[1964]: Removed session 11. Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:00.976 [WARNING][6332] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0", GenerateName:"calico-kube-controllers-58df989bd8-", Namespace:"calico-system", SelfLink:"", UID:"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58df989bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc", Pod:"calico-kube-controllers-58df989bd8-4mk2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicd4bd986752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:00.984 [INFO][6332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:00.984 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" iface="eth0" netns="" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:00.984 [INFO][6332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:00.984 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.145 [INFO][6340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.163 [INFO][6340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.163 [INFO][6340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.238 [WARNING][6340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.238 [INFO][6340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.261 [INFO][6340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:01.280658 containerd[1977]: 2025-09-13 00:07:01.265 [INFO][6332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:01.280658 containerd[1977]: time="2025-09-13T00:07:01.280643782Z" level=info msg="TearDown network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" successfully" Sep 13 00:07:01.282603 containerd[1977]: time="2025-09-13T00:07:01.280680043Z" level=info msg="StopPodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" returns successfully" Sep 13 00:07:01.355383 containerd[1977]: time="2025-09-13T00:07:01.354834783Z" level=info msg="RemovePodSandbox for \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" Sep 13 00:07:01.355383 containerd[1977]: time="2025-09-13T00:07:01.354874528Z" level=info msg="Forcibly stopping sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\"" Sep 13 00:07:01.579429 sshd[6350]: Accepted publickey for core from 139.178.89.65 port 53268 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:01.600088 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:01.630785 systemd-logind[1964]: New session 12 of user core. Sep 13 00:07:01.646679 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.701 [WARNING][6360] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0", GenerateName:"calico-kube-controllers-58df989bd8-", Namespace:"calico-system", SelfLink:"", UID:"c459cf7c-3ec3-4406-97f9-a4a7ecf9e253", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58df989bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-100", ContainerID:"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc", Pod:"calico-kube-controllers-58df989bd8-4mk2j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.68.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicd4bd986752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.711 [INFO][6360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.712 [INFO][6360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" iface="eth0" netns="" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.712 [INFO][6360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.712 [INFO][6360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.926 [INFO][6368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.926 [INFO][6368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.926 [INFO][6368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.955 [WARNING][6368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.972 [INFO][6368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" HandleID="k8s-pod-network.050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Workload="ip--172--31--17--100-k8s-calico--kube--controllers--58df989bd8--4mk2j-eth0" Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:01.997 [INFO][6368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:02.039000 containerd[1977]: 2025-09-13 00:07:02.024 [INFO][6360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915" Sep 13 00:07:02.040585 containerd[1977]: time="2025-09-13T00:07:02.037241683Z" level=info msg="TearDown network for sandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" successfully" Sep 13 00:07:02.085503 containerd[1977]: time="2025-09-13T00:07:02.084987434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:02.085503 containerd[1977]: time="2025-09-13T00:07:02.085078625Z" level=info msg="RemovePodSandbox \"050c9023964b2714072687ae8e7a9a4b32a41b68fb2ca1e5cb46881146c78915\" returns successfully" Sep 13 00:07:02.579732 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.XNZmhM.mount: Deactivated successfully. Sep 13 00:07:02.898560 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.UDppLE.mount: Deactivated successfully. Sep 13 00:07:03.483329 sshd[6350]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:03.502829 systemd[1]: sshd@11-172.31.17.100:22-139.178.89.65:53268.service: Deactivated successfully. Sep 13 00:07:03.508383 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:07:03.511026 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:07:03.517934 systemd-logind[1964]: Removed session 12. 
Sep 13 00:07:04.975402 containerd[1977]: time="2025-09-13T00:07:04.873657186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:07:04.979908 containerd[1977]: time="2025-09-13T00:07:04.975271014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.060206 containerd[1977]: time="2025-09-13T00:07:05.060152403Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.066367 containerd[1977]: time="2025-09-13T00:07:05.065489284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.066367 containerd[1977]: time="2025-09-13T00:07:05.066010802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 7.170228558s" Sep 13 00:07:05.066367 containerd[1977]: time="2025-09-13T00:07:05.066060269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:07:05.247849 containerd[1977]: time="2025-09-13T00:07:05.247350744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:07:05.288547 containerd[1977]: time="2025-09-13T00:07:05.285000430Z" level=info msg="CreateContainer within sandbox \"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:07:05.438873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963132563.mount: Deactivated successfully. Sep 13 00:07:05.577699 containerd[1977]: time="2025-09-13T00:07:05.577562685Z" level=info msg="CreateContainer within sandbox \"3ec758cacb838e4062b540dc8bc56e5f140f5f823d352a4ab618d706ac04017a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0b28f22bee9646306644a6be5be8f50becb3ccca1e1d54f70a5de4440b10b80\"" Sep 13 00:07:05.583742 containerd[1977]: time="2025-09-13T00:07:05.580986480Z" level=info msg="StartContainer for \"c0b28f22bee9646306644a6be5be8f50becb3ccca1e1d54f70a5de4440b10b80\"" Sep 13 00:07:05.971700 systemd[1]: Started cri-containerd-c0b28f22bee9646306644a6be5be8f50becb3ccca1e1d54f70a5de4440b10b80.scope - libcontainer container c0b28f22bee9646306644a6be5be8f50becb3ccca1e1d54f70a5de4440b10b80. 
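The PullImage / CreateContainer / StartContainer lines above are containerd's CRI server answering the kubelet, and the same three calls can be made directly against the CRI socket. A condensed Go sketch using the published CRI API (k8s.io/cri-api); the sandbox ID and pod sandbox config are placeholders and error handling is abbreviated:

    // Sketch: drive containerd's CRI the way the kubelet does for the lines above.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        img := runtimeapi.NewImageServiceClient(conn)
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // "PullImage ... returns image reference sha256:879f2443..."
        pull, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/apiserver:v3.30.3"},
        })
        if err != nil {
            log.Fatal(err)
        }

        var sandboxID string                        // "3ec758cacb83..." in the log; placeholder here
        var sandboxCfg *runtimeapi.PodSandboxConfig // elided placeholder

        // "CreateContainer within sandbox ... returns container id c0b28f22..."
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "calico-apiserver", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: pull.ImageRef},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // "StartContainer ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }

Note also the cached fast path a few seconds later: the second pull of the same apiserver tag logs bytes read=77 and finishes in 385.915423ms, because all blobs are already in the content store and the pull reduces to re-resolving the manifest.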
Sep 13 00:07:06.090863 containerd[1977]: time="2025-09-13T00:07:06.090811561Z" level=info msg="StartContainer for \"c0b28f22bee9646306644a6be5be8f50becb3ccca1e1d54f70a5de4440b10b80\" returns successfully" Sep 13 00:07:06.326006 kubelet[3167]: I0913 00:07:06.309768 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56596dc84c-nl9sx" podStartSLOduration=34.906226036 podStartE2EDuration="51.280060061s" podCreationTimestamp="2025-09-13 00:06:15 +0000 UTC" firstStartedPulling="2025-09-13 00:06:48.788505855 +0000 UTC m=+51.627475889" lastFinishedPulling="2025-09-13 00:07:05.162339893 +0000 UTC m=+68.001309914" observedRunningTime="2025-09-13 00:07:06.279393988 +0000 UTC m=+69.118364030" watchObservedRunningTime="2025-09-13 00:07:06.280060061 +0000 UTC m=+69.119030104" Sep 13 00:07:08.541896 systemd[1]: Started sshd@12-172.31.17.100:22-139.178.89.65:53280.service - OpenSSH per-connection server daemon (139.178.89.65:53280). Sep 13 00:07:08.832447 sshd[6512]: Accepted publickey for core from 139.178.89.65 port 53280 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:08.837720 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:08.860366 systemd-logind[1964]: New session 13 of user core. Sep 13 00:07:08.865622 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:07:10.235997 sshd[6512]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:10.244347 systemd[1]: sshd@12-172.31.17.100:22-139.178.89.65:53280.service: Deactivated successfully. Sep 13 00:07:10.249917 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:07:10.252268 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:07:10.254524 systemd-logind[1964]: Removed session 13. 
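The pod_startup_latency_tracker line is worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E time minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the startup SLI. Redoing the arithmetic with the monotonic m=+ offsets from the line reproduces the logged value exactly:

    // Checking the kubelet's arithmetic for calico-apiserver-56596dc84c-nl9sx.
    package main

    import "fmt"

    func main() {
        e2e := 51.280060061                 // podStartE2EDuration, in seconds
        pull := 68.001309914 - 51.627475889 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
        fmt.Printf("%.9fs\n", e2e-pull)     // prints 34.906226036s, the logged podStartSLOduration
    }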
Sep 13 00:07:10.810315 containerd[1977]: time="2025-09-13T00:07:10.810262227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:10.813445 containerd[1977]: time="2025-09-13T00:07:10.813367363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:07:10.814530 containerd[1977]: time="2025-09-13T00:07:10.813924460Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:10.817356 containerd[1977]: time="2025-09-13T00:07:10.817299111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:10.820832 containerd[1977]: time="2025-09-13T00:07:10.820796772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.572690527s" Sep 13 00:07:10.821522 containerd[1977]: time="2025-09-13T00:07:10.820932618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:07:10.838661 containerd[1977]: time="2025-09-13T00:07:10.838625362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:07:11.084922 containerd[1977]: time="2025-09-13T00:07:11.084787551Z" level=info msg="CreateContainer within sandbox \"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:07:11.155129 containerd[1977]: time="2025-09-13T00:07:11.155071168Z" level=info msg="CreateContainer within sandbox \"172dd839d2ff45ad6ab9a37917636715f809b09d8354ad879cc8da668e528adc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b\"" Sep 13 00:07:11.158721 containerd[1977]: time="2025-09-13T00:07:11.158691132Z" level=info msg="StartContainer for \"3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b\"" Sep 13 00:07:11.209654 containerd[1977]: time="2025-09-13T00:07:11.208610980Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:11.221271 containerd[1977]: time="2025-09-13T00:07:11.221196166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:07:11.224966 containerd[1977]: time="2025-09-13T00:07:11.224840369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 385.915423ms"
Sep 13 00:07:11.224966 containerd[1977]: time="2025-09-13T00:07:11.224880785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:07:11.228136 containerd[1977]: time="2025-09-13T00:07:11.227626451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:07:11.234118 containerd[1977]: time="2025-09-13T00:07:11.234021984Z" level=info msg="CreateContainer within sandbox \"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:07:11.354302 containerd[1977]: time="2025-09-13T00:07:11.354099057Z" level=info msg="CreateContainer within sandbox \"acb2a3b67370734f4c9bf68d83d99a03d5d47d27ae08a5ad4298fd558a21d097\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0611815f794a660cbfca5c80f7a0662576ad05d1db222c4819a62a4675edb8a3\"" Sep 13 00:07:11.360313 containerd[1977]: time="2025-09-13T00:07:11.359497488Z" level=info msg="StartContainer for \"0611815f794a660cbfca5c80f7a0662576ad05d1db222c4819a62a4675edb8a3\"" Sep 13 00:07:11.389832 systemd[1]: Started cri-containerd-3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b.scope - libcontainer container 3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b. Sep 13 00:07:11.432647 systemd[1]: Started cri-containerd-0611815f794a660cbfca5c80f7a0662576ad05d1db222c4819a62a4675edb8a3.scope - libcontainer container 0611815f794a660cbfca5c80f7a0662576ad05d1db222c4819a62a4675edb8a3. Sep 13 00:07:11.524720 containerd[1977]: time="2025-09-13T00:07:11.524667654Z" level=info msg="StartContainer for \"3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b\" returns successfully" Sep 13 00:07:11.561301 containerd[1977]: time="2025-09-13T00:07:11.558971517Z" level=info msg="StartContainer for \"0611815f794a660cbfca5c80f7a0662576ad05d1db222c4819a62a4675edb8a3\" returns successfully" Sep 13 00:07:12.565198 kubelet[3167]: I0913 00:07:12.564398 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58df989bd8-4mk2j" podStartSLOduration=30.978345678 podStartE2EDuration="52.524864071s" podCreationTimestamp="2025-09-13 00:06:20 +0000 UTC" firstStartedPulling="2025-09-13 00:06:49.285512238 +0000 UTC m=+52.124482268" lastFinishedPulling="2025-09-13 00:07:10.832030625 +0000 UTC m=+73.671000661" observedRunningTime="2025-09-13 00:07:12.462483443 +0000 UTC m=+75.301453482" watchObservedRunningTime="2025-09-13 00:07:12.524864071 +0000 UTC m=+75.363834115" Sep 13 00:07:12.569253 kubelet[3167]: I0913 00:07:12.568535 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56596dc84c-7sb8x" podStartSLOduration=35.833102504 podStartE2EDuration="57.568515038s" podCreationTimestamp="2025-09-13 00:06:15 +0000 UTC" firstStartedPulling="2025-09-13 00:06:49.492014761 +0000 UTC m=+52.330984794" lastFinishedPulling="2025-09-13 00:07:11.227427307 +0000 UTC m=+74.066397328" observedRunningTime="2025-09-13 00:07:12.566245278 +0000 UTC m=+75.405215314" watchObservedRunningTime="2025-09-13 00:07:12.568515038 +0000 UTC m=+75.407485082" Sep 13 00:07:13.277541 containerd[1977]: time="2025-09-13T00:07:13.277497206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:13.280702 containerd[1977]: time="2025-09-13T00:07:13.278856611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:07:13.282782 containerd[1977]: time="2025-09-13T00:07:13.282673613Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:13.290205 containerd[1977]: time="2025-09-13T00:07:13.290063802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:13.293453 containerd[1977]: time="2025-09-13T00:07:13.293395181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.065716998s" Sep 13 00:07:13.293453 containerd[1977]: time="2025-09-13T00:07:13.293460064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:07:13.303399 containerd[1977]: time="2025-09-13T00:07:13.302326626Z" level=info msg="CreateContainer within sandbox \"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:07:13.340248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171933683.mount: Deactivated successfully. Sep 13 00:07:13.399628 containerd[1977]: time="2025-09-13T00:07:13.399485052Z" level=info msg="CreateContainer within sandbox \"1c4d2740afecb90014dfe3d810e34c32233c02786ed956abd6dda49f15cae1b0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6aa1f1f67715233653e60ed33876dc73cb34ec122648a1c20f22e0ae380d318a\"" Sep 13 00:07:13.401542 containerd[1977]: time="2025-09-13T00:07:13.400572080Z" level=info msg="StartContainer for \"6aa1f1f67715233653e60ed33876dc73cb34ec122648a1c20f22e0ae380d318a\"" Sep 13 00:07:13.505667 systemd[1]: Started cri-containerd-6aa1f1f67715233653e60ed33876dc73cb34ec122648a1c20f22e0ae380d318a.scope - libcontainer container 6aa1f1f67715233653e60ed33876dc73cb34ec122648a1c20f22e0ae380d318a. Sep 13 00:07:13.608829 containerd[1977]: time="2025-09-13T00:07:13.608603555Z" level=info msg="StartContainer for \"6aa1f1f67715233653e60ed33876dc73cb34ec122648a1c20f22e0ae380d318a\" returns successfully" Sep 13 00:07:14.865532 kubelet[3167]: I0913 00:07:14.862645 3167 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:07:14.870432 kubelet[3167]: I0913 00:07:14.870360 3167 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:07:15.291790 systemd[1]: Started sshd@13-172.31.17.100:22-139.178.89.65:46892.service - OpenSSH per-connection server daemon (139.178.89.65:46892).
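The two csi_plugin lines are the kubelet's plugin watcher completing a CSI driver registration: Calico's node-driver-registrar exposes a tiny registration gRPC service on a socket under /var/lib/kubelet/plugins_registry/, the kubelet calls GetInfo on it, validates the advertised name, endpoint, and versions (csi.tigera.io, /var/lib/kubelet/plugins/csi.tigera.io/csi.sock, 1.0.0), and then registers the plugin. The registrar's side of that handshake, sketched with the published registration API (k8s.io/kubelet/pkg/apis/pluginregistration/v1); socket paths follow the log, everything else is illustrative:

    package main

    import (
        "context"
        "log"
        "net"

        "google.golang.org/grpc"
        registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type registrar struct{}

    // GetInfo is what the kubelet's plugin watcher calls first.
    func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
        return &registerapi.PluginInfo{
            Type:              registerapi.CSIPlugin,
            Name:              "csi.tigera.io",
            Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            SupportedVersions: []string{"1.0.0"},
        }, nil
    }

    // NotifyRegistrationStatus reports whether the registration stuck.
    func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
        if !s.PluginRegistered {
            log.Printf("registration failed: %s", s.Error)
        }
        return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
        l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
        if err != nil {
            log.Fatal(err)
        }
        srv := grpc.NewServer()
        registerapi.RegisterRegistrationServer(srv, registrar{})
        log.Fatal(srv.Serve(l))
    }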
Sep 13 00:07:15.535838 sshd[6681]: Accepted publickey for core from 139.178.89.65 port 46892 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:15.539176 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:15.544927 systemd-logind[1964]: New session 14 of user core. Sep 13 00:07:15.550631 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:07:16.701902 sshd[6681]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:16.708085 systemd[1]: sshd@13-172.31.17.100:22-139.178.89.65:46892.service: Deactivated successfully. Sep 13 00:07:16.711925 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:07:16.714690 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:07:16.718098 systemd-logind[1964]: Removed session 14. Sep 13 00:07:21.744379 systemd[1]: Started sshd@14-172.31.17.100:22-139.178.89.65:43260.service - OpenSSH per-connection server daemon (139.178.89.65:43260). Sep 13 00:07:22.032910 sshd[6695]: Accepted publickey for core from 139.178.89.65 port 43260 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:22.035540 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:22.041584 systemd-logind[1964]: New session 15 of user core. Sep 13 00:07:22.046769 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:07:22.763988 sshd[6695]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:22.775702 systemd[1]: sshd@14-172.31.17.100:22-139.178.89.65:43260.service: Deactivated successfully. Sep 13 00:07:22.778566 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:07:22.780099 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:07:22.796345 systemd[1]: Started sshd@15-172.31.17.100:22-139.178.89.65:43270.service - OpenSSH per-connection server daemon (139.178.89.65:43270). Sep 13 00:07:22.800571 systemd-logind[1964]: Removed session 15. Sep 13 00:07:22.973448 sshd[6708]: Accepted publickey for core from 139.178.89.65 port 43270 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:22.974834 sshd[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:22.979838 systemd-logind[1964]: New session 16 of user core. Sep 13 00:07:22.986622 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:07:26.568593 sshd[6708]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:26.572955 systemd[1]: sshd@15-172.31.17.100:22-139.178.89.65:43270.service: Deactivated successfully. Sep 13 00:07:26.575082 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:07:26.576587 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:07:26.577861 systemd-logind[1964]: Removed session 16. Sep 13 00:07:26.602734 systemd[1]: Started sshd@16-172.31.17.100:22-139.178.89.65:43274.service - OpenSSH per-connection server daemon (139.178.89.65:43274). Sep 13 00:07:26.804329 sshd[6725]: Accepted publickey for core from 139.178.89.65 port 43274 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:26.805682 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:26.811592 systemd-logind[1964]: New session 17 of user core. Sep 13 00:07:26.815633 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 13 00:07:29.604506 sshd[6725]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:29.629584 systemd[1]: sshd@16-172.31.17.100:22-139.178.89.65:43274.service: Deactivated successfully. Sep 13 00:07:29.634837 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:07:29.641232 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:07:29.663163 systemd[1]: Started sshd@17-172.31.17.100:22-139.178.89.65:43282.service - OpenSSH per-connection server daemon (139.178.89.65:43282). Sep 13 00:07:29.670898 systemd-logind[1964]: Removed session 17. Sep 13 00:07:29.969761 sshd[6751]: Accepted publickey for core from 139.178.89.65 port 43282 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:29.971362 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:29.980964 systemd-logind[1964]: New session 18 of user core. Sep 13 00:07:29.985784 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:07:31.123457 sshd[6751]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:31.157740 systemd[1]: sshd@17-172.31.17.100:22-139.178.89.65:43282.service: Deactivated successfully. Sep 13 00:07:31.161169 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:07:31.167499 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:07:31.176753 systemd[1]: Started sshd@18-172.31.17.100:22-139.178.89.65:57942.service - OpenSSH per-connection server daemon (139.178.89.65:57942). Sep 13 00:07:31.182574 systemd-logind[1964]: Removed session 18. Sep 13 00:07:31.399349 sshd[6766]: Accepted publickey for core from 139.178.89.65 port 57942 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:31.403175 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:31.412493 systemd-logind[1964]: New session 19 of user core. Sep 13 00:07:31.418078 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:07:31.957229 sshd[6766]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:31.962322 systemd[1]: sshd@18-172.31.17.100:22-139.178.89.65:57942.service: Deactivated successfully. Sep 13 00:07:31.965049 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:07:31.969204 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:07:31.970775 systemd-logind[1964]: Removed session 19. Sep 13 00:07:32.438344 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.p813kP.mount: Deactivated successfully. Sep 13 00:07:37.074815 systemd[1]: Started sshd@19-172.31.17.100:22-139.178.89.65:57958.service - OpenSSH per-connection server daemon (139.178.89.65:57958). Sep 13 00:07:37.430449 sshd[6842]: Accepted publickey for core from 139.178.89.65 port 57958 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:37.435682 sshd[6842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:37.463512 systemd-logind[1964]: New session 20 of user core. Sep 13 00:07:37.471992 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:07:38.965657 sshd[6842]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:38.971539 systemd[1]: sshd@19-172.31.17.100:22-139.178.89.65:57958.service: Deactivated successfully. Sep 13 00:07:38.975315 systemd[1]: session-20.scope: Deactivated successfully. 
Sep 13 00:07:38.977328 systemd-logind[1964]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:07:38.980416 systemd-logind[1964]: Removed session 20. Sep 13 00:07:44.006833 systemd[1]: Started sshd@20-172.31.17.100:22-139.178.89.65:39760.service - OpenSSH per-connection server daemon (139.178.89.65:39760). Sep 13 00:07:44.355965 sshd[6855]: Accepted publickey for core from 139.178.89.65 port 39760 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:44.358372 sshd[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:44.372560 systemd-logind[1964]: New session 21 of user core. Sep 13 00:07:44.377042 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:07:45.652727 sshd[6855]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:45.658693 systemd[1]: sshd@20-172.31.17.100:22-139.178.89.65:39760.service: Deactivated successfully. Sep 13 00:07:45.665283 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:07:45.668843 systemd-logind[1964]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:07:45.671297 systemd-logind[1964]: Removed session 21. Sep 13 00:07:50.697625 systemd[1]: Started sshd@21-172.31.17.100:22-139.178.89.65:37252.service - OpenSSH per-connection server daemon (139.178.89.65:37252). Sep 13 00:07:50.961390 sshd[6870]: Accepted publickey for core from 139.178.89.65 port 37252 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:50.963135 sshd[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:50.972375 systemd-logind[1964]: New session 22 of user core. Sep 13 00:07:50.977627 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:07:51.428572 sshd[6870]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:51.432716 systemd-logind[1964]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:07:51.433618 systemd[1]: sshd@21-172.31.17.100:22-139.178.89.65:37252.service: Deactivated successfully. Sep 13 00:07:51.435688 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:07:51.436976 systemd-logind[1964]: Removed session 22. Sep 13 00:07:56.465821 systemd[1]: Started sshd@22-172.31.17.100:22-139.178.89.65:37254.service - OpenSSH per-connection server daemon (139.178.89.65:37254). Sep 13 00:07:56.786585 sshd[6883]: Accepted publickey for core from 139.178.89.65 port 37254 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:56.788786 sshd[6883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:56.798838 systemd-logind[1964]: New session 23 of user core. Sep 13 00:07:56.807651 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:07:57.405441 sshd[6883]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:57.426149 systemd[1]: sshd@22-172.31.17.100:22-139.178.89.65:37254.service: Deactivated successfully. Sep 13 00:07:57.434090 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:07:57.439238 systemd-logind[1964]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:07:57.441171 systemd-logind[1964]: Removed session 23. Sep 13 00:08:02.467857 systemd[1]: Started sshd@23-172.31.17.100:22-139.178.89.65:50968.service - OpenSSH per-connection server daemon (139.178.89.65:50968). 
Sep 13 00:08:02.543963 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.aHeAad.mount: Deactivated successfully. Sep 13 00:08:02.865519 sshd[6924]: Accepted publickey for core from 139.178.89.65 port 50968 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:02.868898 sshd[6924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:02.882363 systemd-logind[1964]: New session 24 of user core. Sep 13 00:08:02.887734 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:08:03.184927 systemd[1]: run-containerd-runc-k8s.io-2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76-runc.Eju73F.mount: Deactivated successfully. Sep 13 00:08:05.100315 sshd[6924]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:05.148895 systemd[1]: sshd@23-172.31.17.100:22-139.178.89.65:50968.service: Deactivated successfully. Sep 13 00:08:05.152519 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:08:05.154286 systemd-logind[1964]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:08:05.161493 systemd-logind[1964]: Removed session 24. Sep 13 00:08:07.055518 systemd[1]: run-containerd-runc-k8s.io-3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b-runc.NZY38c.mount: Deactivated successfully. Sep 13 00:08:18.781875 systemd[1]: cri-containerd-5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8.scope: Deactivated successfully. Sep 13 00:08:18.782652 systemd[1]: cri-containerd-5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8.scope: Consumed 4.035s CPU time, 23.9M memory peak, 0B memory swap peak. Sep 13 00:08:19.011832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8-rootfs.mount: Deactivated successfully. Sep 13 00:08:19.071345 containerd[1977]: time="2025-09-13T00:08:19.041972530Z" level=info msg="shim disconnected" id=5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8 namespace=k8s.io Sep 13 00:08:19.071345 containerd[1977]: time="2025-09-13T00:08:19.071270564Z" level=warning msg="cleaning up after shim disconnected" id=5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8 namespace=k8s.io Sep 13 00:08:19.071345 containerd[1977]: time="2025-09-13T00:08:19.071287677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:19.452865 kubelet[3167]: I0913 00:08:19.451068 3167 scope.go:117] "RemoveContainer" containerID="5cf9b3eca5375addc0500dbcecf12ec00000312fbf60fbe717f942b1a941e7c8" Sep 13 00:08:19.505648 containerd[1977]: time="2025-09-13T00:08:19.505581288Z" level=info msg="CreateContainer within sandbox \"0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:08:19.621432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583093802.mount: Deactivated successfully. Sep 13 00:08:19.625917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119409785.mount: Deactivated successfully. 
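The block above is the standard kubelet recovery loop for a dead control-plane container: the runtime scope exits (systemd logs the final accounting, 4.035s of CPU and a 23.9M memory peak), containerd reaps the shim ("shim disconnected" / "cleaning up dead shim"), and the kubelet removes the exited container and creates a replacement in the same pod sandbox with only the attempt counter bumped, hence &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}. In CRI terms the step looks roughly like this (same runtimeapi package as the earlier sketch; IDs and sandbox config are placeholders, and the real kubelet layers restart policy and backoff on top):

    // Sketch of the recreate-with-Attempt+1 step seen in the log.
    package sketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func restartInPlace(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        oldID, sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) error {
        st, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: oldID})
        if err != nil {
            return err
        }
        if st.Status.State != runtimeapi.ContainerState_CONTAINER_EXITED {
            return nil // only dead containers get replaced
        }
        // "RemoveContainer" followed by "CreateContainer ... Attempt:1", as logged.
        if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: oldID}); err != nil {
            return err
        }
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{
                    Name:    st.Status.Metadata.Name,        // "kube-controller-manager"
                    Attempt: st.Status.Metadata.Attempt + 1, // 0 -> 1
                },
                Image: st.Status.Image,
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
        return err
    }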
Sep 13 00:08:19.656989 containerd[1977]: time="2025-09-13T00:08:19.656924538Z" level=info msg="CreateContainer within sandbox \"0a90505030db100fa8e18dfef33236787e1d1f2a2108e7d8538f7f3d41260e54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7d6a2543138e74861c27ba3b178bc539530c958d45fe5408bee0687969a7c429\"" Sep 13 00:08:19.657507 containerd[1977]: time="2025-09-13T00:08:19.657476531Z" level=info msg="StartContainer for \"7d6a2543138e74861c27ba3b178bc539530c958d45fe5408bee0687969a7c429\"" Sep 13 00:08:19.727627 systemd[1]: Started cri-containerd-7d6a2543138e74861c27ba3b178bc539530c958d45fe5408bee0687969a7c429.scope - libcontainer container 7d6a2543138e74861c27ba3b178bc539530c958d45fe5408bee0687969a7c429. Sep 13 00:08:19.789362 containerd[1977]: time="2025-09-13T00:08:19.789304363Z" level=info msg="StartContainer for \"7d6a2543138e74861c27ba3b178bc539530c958d45fe5408bee0687969a7c429\" returns successfully" Sep 13 00:08:20.178334 systemd[1]: cri-containerd-d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3.scope: Deactivated successfully. Sep 13 00:08:20.178669 systemd[1]: cri-containerd-d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3.scope: Consumed 12.913s CPU time. Sep 13 00:08:20.207016 containerd[1977]: time="2025-09-13T00:08:20.206945568Z" level=info msg="shim disconnected" id=d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3 namespace=k8s.io Sep 13 00:08:20.208449 containerd[1977]: time="2025-09-13T00:08:20.207525542Z" level=warning msg="cleaning up after shim disconnected" id=d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3 namespace=k8s.io Sep 13 00:08:20.208449 containerd[1977]: time="2025-09-13T00:08:20.207555219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:20.207813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3-rootfs.mount: Deactivated successfully. Sep 13 00:08:20.251807 containerd[1977]: time="2025-09-13T00:08:20.251590261Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:08:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:08:20.431638 kubelet[3167]: I0913 00:08:20.430560 3167 scope.go:117] "RemoveContainer" containerID="d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3" Sep 13 00:08:20.433938 containerd[1977]: time="2025-09-13T00:08:20.433907435Z" level=info msg="CreateContainer within sandbox \"bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 13 00:08:20.470578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674628144.mount: Deactivated successfully. 
Sep 13 00:08:20.472112 containerd[1977]: time="2025-09-13T00:08:20.472069540Z" level=info msg="CreateContainer within sandbox \"bce752d237402a15fd3fef006ec750965c217e4e75bde39b27875fa0b453135a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71\"" Sep 13 00:08:20.473437 containerd[1977]: time="2025-09-13T00:08:20.472963547Z" level=info msg="StartContainer for \"69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71\"" Sep 13 00:08:20.519232 systemd[1]: Started cri-containerd-69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71.scope - libcontainer container 69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71. Sep 13 00:08:20.629435 containerd[1977]: time="2025-09-13T00:08:20.628212833Z" level=info msg="StartContainer for \"69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71\" returns successfully" Sep 13 00:08:21.056270 kubelet[3167]: E0913 00:08:21.054641 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-100?timeout=10s\": context deadline exceeded" Sep 13 00:08:24.709160 systemd[1]: cri-containerd-33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857.scope: Deactivated successfully. Sep 13 00:08:24.709393 systemd[1]: cri-containerd-33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857.scope: Consumed 2.125s CPU time, 20.6M memory peak, 0B memory swap peak. Sep 13 00:08:24.738175 containerd[1977]: time="2025-09-13T00:08:24.738066094Z" level=info msg="shim disconnected" id=33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857 namespace=k8s.io Sep 13 00:08:24.739918 containerd[1977]: time="2025-09-13T00:08:24.739455223Z" level=warning msg="cleaning up after shim disconnected" id=33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857 namespace=k8s.io Sep 13 00:08:24.739918 containerd[1977]: time="2025-09-13T00:08:24.739513737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:24.741075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857-rootfs.mount: Deactivated successfully. Sep 13 00:08:25.460808 kubelet[3167]: I0913 00:08:25.460775 3167 scope.go:117] "RemoveContainer" containerID="33c0e25ba89366d0d0d3db393267dd969bf6048e147bd2efac21cdb182f8d857" Sep 13 00:08:25.463018 containerd[1977]: time="2025-09-13T00:08:25.462973885Z" level=info msg="CreateContainer within sandbox \"34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:08:25.486817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152068782.mount: Deactivated successfully. 
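The "Failed to update lease" error threaded through these restarts is the kubelet's node heartbeat: on each period it updates its Lease object in the kube-node-lease namespace, bumping spec.renewTime, with a 10s request timeout (the ?timeout=10s in the logged URL). With this node's own control-plane containers churning, those writes stall until the context deadline trips. The renewal itself is a small client-go operation, roughly (placeholder config source and node name; the real kubelet adds retries and jitter):

    // Sketch of a node-lease renewal like the one failing above.
    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func renew(cli kubernetes.Interface, nodeName string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        lease, err := cli.CoordinationV1().Leases("kube-node-lease").Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        now := metav1.NewMicroTime(time.Now())
        lease.Spec.RenewTime = &now
        _, err = cli.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{})
        return err // "context deadline exceeded" when the apiserver cannot answer in 10s
    }

    func main() {
        cfg, err := rest.InClusterConfig() // placeholder config source
        if err != nil {
            log.Fatal(err)
        }
        cli := kubernetes.NewForConfigOrDie(cfg)
        if err := renew(cli, "ip-172-31-17-100"); err != nil {
            log.Printf("Failed to update lease: %v", err)
        }
    }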
Sep 13 00:08:25.488107 containerd[1977]: time="2025-09-13T00:08:25.488059799Z" level=info msg="CreateContainer within sandbox \"34a7f2fed531242824f5be58a6739753ff96fbb50d8e6355098fe22386edad54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006\"" Sep 13 00:08:25.488619 containerd[1977]: time="2025-09-13T00:08:25.488589299Z" level=info msg="StartContainer for \"52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006\"" Sep 13 00:08:25.522617 systemd[1]: Started cri-containerd-52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006.scope - libcontainer container 52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006. Sep 13 00:08:25.569979 containerd[1977]: time="2025-09-13T00:08:25.569923768Z" level=info msg="StartContainer for \"52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006\" returns successfully" Sep 13 00:08:25.744845 systemd[1]: run-containerd-runc-k8s.io-52a7128429f88f7a6d199901db863b003aad659a377a4808d8336e81cd4d6006-runc.KOnVVS.mount: Deactivated successfully. Sep 13 00:08:30.996216 systemd[1]: run-containerd-runc-k8s.io-3765aa5ac6e51347900b6380e8ef5f0ceb4eca528d15dea461b1b1629449222b-runc.ueagY2.mount: Deactivated successfully. Sep 13 00:08:31.086612 kubelet[3167]: E0913 00:08:31.079314 3167 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Sep 13 00:08:31.088658 kubelet[3167]: E0913 00:08:31.086714 3167 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Sep 13 00:08:32.437874 systemd[1]: run-containerd-runc-k8s.io-811a954cbb4eba6049149712dfe25ce6e0d9bb085cce564ed4053774476c8f51-runc.Vl2446.mount: Deactivated successfully. Sep 13 00:08:33.156726 systemd[1]: run-containerd-runc-k8s.io-2079427b27eb8b5ca3cbeaefac8f7afababa321007fb44d945797ab54fb3ae76-runc.lAZJBy.mount: Deactivated successfully. Sep 13 00:08:33.190761 systemd[1]: cri-containerd-69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71.scope: Deactivated successfully. Sep 13 00:08:33.222599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:33.247821 containerd[1977]: time="2025-09-13T00:08:33.247679479Z" level=info msg="shim disconnected" id=69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71 namespace=k8s.io Sep 13 00:08:33.248401 containerd[1977]: time="2025-09-13T00:08:33.247749292Z" level=warning msg="cleaning up after shim disconnected" id=69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71 namespace=k8s.io Sep 13 00:08:33.248401 containerd[1977]: time="2025-09-13T00:08:33.247859330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:33.534996 kubelet[3167]: I0913 00:08:33.534880 3167 scope.go:117] "RemoveContainer" containerID="d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3" Sep 13 00:08:33.535397 kubelet[3167]: I0913 00:08:33.535247 3167 scope.go:117] "RemoveContainer" containerID="69a7bf043625b7d7de4b38812b86be473939c71f436369c50df937a21c2f2e71" Sep 13 00:08:33.548404 kubelet[3167]: E0913 00:08:33.548304 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-tphlt_tigera-operator(12063c01-def6-410d-b8db-72b900710b9b)\"" pod="tigera-operator/tigera-operator-58fc44c59b-tphlt" podUID="12063c01-def6-410d-b8db-72b900710b9b" Sep 13 00:08:33.675462 containerd[1977]: time="2025-09-13T00:08:33.675388079Z" level=info msg="RemoveContainer for \"d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3\"" Sep 13 00:08:33.726038 containerd[1977]: time="2025-09-13T00:08:33.725984403Z" level=info msg="RemoveContainer for \"d9fcdc49849f733594951ceb4f203f5777ac13e984c2860c81b78df34c67f0d3\" returns successfully"
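The closing CrashLoopBackOff message is the kubelet's restart throttle engaging after the replacement tigera-operator container died as well: restart delays start at 10s ("back-off 10s restarting failed container"), double on each subsequent failure up to a five-minute cap, and reset once a container stays up long enough. The final RemoveContainer lines are the kubelet pruning the older dead instance (d9fcdc49...) now that a newer dead one exists. The backoff schedule, sketched:

    // The kubelet's crash-loop delays: 10s base doubling to a 5m cap
    // (values as documented for recent kubelets; treat as illustrative).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            base     = 10 * time.Second
            maxDelay = 300 * time.Second
        )
        d := base
        for i := 0; i < 8; i++ {
            fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }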