Sep 13 00:06:53.990162 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:06:53.990203 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:53.990223 kernel: BIOS-provided physical RAM map:
Sep 13 00:06:53.990236 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:06:53.990248 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:06:53.990260 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 13 00:06:53.990276 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 13 00:06:53.990289 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:06:53.990303 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:06:53.990319 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:06:53.990332 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:06:53.990345 kernel: NX (Execute Disable) protection: active
Sep 13 00:06:53.990358 kernel: APIC: Static calls initialized
Sep 13 00:06:53.990372 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:06:53.990388 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 13 00:06:53.990406 kernel: SMBIOS 2.7 present.
Sep 13 00:06:53.990420 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:06:53.990435 kernel: Hypervisor detected: KVM
Sep 13 00:06:53.990449 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:06:53.990464 kernel: kvm-clock: using sched offset of 3733721844 cycles
Sep 13 00:06:53.990479 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:06:53.990494 kernel: tsc: Detected 2499.998 MHz processor
Sep 13 00:06:53.990509 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:06:53.990524 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:06:53.990539 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:06:53.990557 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:06:53.990572 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:06:53.990587 kernel: Using GB pages for direct mapping
Sep 13 00:06:53.990602 kernel: Secure boot disabled
Sep 13 00:06:53.990616 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:53.990631 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:06:53.990645 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:06:53.990661 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:06:53.990675 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:06:53.990693 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:06:53.990707 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:06:53.990722 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:06:53.990737 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:06:53.990751 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:06:53.990766 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:06:53.990787 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:06:53.990806 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:06:53.990822 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:06:53.990838 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:06:53.990853 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:06:53.990868 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:06:53.990883 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:06:53.990899 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:06:53.990918 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:06:53.990934 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:06:53.990949 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:06:53.990965 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:06:53.990981 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:06:53.990997 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:06:53.991013 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:06:53.991028 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:06:53.991043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:06:53.991084 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:06:53.991100 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:06:53.991116 kernel: Zone ranges:
Sep 13 00:06:53.991131 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:06:53.991146 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:06:53.991161 kernel: Normal empty
Sep 13 00:06:53.991177 kernel: Movable zone start for each node
Sep 13 00:06:53.991193 kernel: Early memory node ranges
Sep 13 00:06:53.991208 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:06:53.991239 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:06:53.991255 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:06:53.991271 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:06:53.991287 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:06:53.991302 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:06:53.991318 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:06:53.991333 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:06:53.991346 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:06:53.991360 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:06:53.991380 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:06:53.991395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:06:53.991411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:06:53.991427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:06:53.991442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:06:53.991457 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:06:53.991473 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:06:53.991489 kernel: TSC deadline timer available
Sep 13 00:06:53.991505 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:06:53.991521 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:06:53.991539 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:06:53.991555 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:06:53.991571 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:06:53.991587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:06:53.991603 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:06:53.991619 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:06:53.991633 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:06:53.991649 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:06:53.991665 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:06:53.991686 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:53.991702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:53.991718 kernel: random: crng init done
Sep 13 00:06:53.991733 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:53.991749 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:06:53.991765 kernel: Fallback order for Node 0: 0
Sep 13 00:06:53.991781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:06:53.991801 kernel: Policy zone: DMA32
Sep 13 00:06:53.991816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:53.991832 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162936K reserved, 0K cma-reserved)
Sep 13 00:06:53.991848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:06:53.991864 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:06:53.991879 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:06:53.991895 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:06:53.991910 kernel: Dynamic Preempt: voluntary
Sep 13 00:06:53.991925 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:53.991941 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:53.991955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:06:53.991968 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:53.991982 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:06:53.991996 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:53.992008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:53.992020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:06:53.992040 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:06:53.992095 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:06:53.992116 kernel: Console: colour dummy device 80x25
Sep 13 00:06:53.992134 kernel: printk: console [tty0] enabled
Sep 13 00:06:53.992149 kernel: printk: console [ttyS0] enabled
Sep 13 00:06:53.992168 kernel: ACPI: Core revision 20230628
Sep 13 00:06:53.992184 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:06:53.992200 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:06:53.992216 kernel: x2apic enabled
Sep 13 00:06:53.992232 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:06:53.992249 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:06:53.992269 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 13 00:06:53.992286 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:06:53.992302 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:06:53.992318 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:06:53.992334 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:06:53.992350 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:06:53.992366 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:06:53.992382 kernel: RETBleed: Vulnerable
Sep 13 00:06:53.992398 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:06:53.992417 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:06:53.992433 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:06:53.992449 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:06:53.992464 kernel: active return thunk: its_return_thunk
Sep 13 00:06:53.992480 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:06:53.992496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:06:53.992513 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:06:53.992529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:06:53.992545 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:06:53.992561 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:06:53.992577 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:06:53.992596 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:06:53.992612 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:06:53.992628 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:06:53.992644 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:06:53.992660 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:06:53.992676 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:06:53.992692 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:06:53.992708 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:06:53.992724 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:06:53.992740 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:06:53.992756 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:06:53.992772 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:06:53.992791 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:53.992807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:06:53.992823 kernel: landlock: Up and running.
Sep 13 00:06:53.992839 kernel: SELinux: Initializing.
Sep 13 00:06:53.992865 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:53.992881 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:53.992897 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Sep 13 00:06:53.992914 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:53.992930 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:53.992947 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:53.992967 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:06:53.992983 kernel: signal: max sigframe size: 3632
Sep 13 00:06:53.993002 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:53.993019 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:06:53.993035 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:06:53.993051 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:53.993090 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:06:53.993106 kernel: .... node #0, CPUs: #1
Sep 13 00:06:53.993123 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:06:53.993144 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:06:53.993160 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:06:53.993177 kernel: smpboot: Max logical packages: 1
Sep 13 00:06:53.993193 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 13 00:06:53.993210 kernel: devtmpfs: initialized
Sep 13 00:06:53.993226 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:06:53.993242 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:06:53.993258 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:53.993275 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:06:53.993294 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:53.993311 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:53.993327 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:53.993343 kernel: audit: type=2000 audit(1757722013.372:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:53.993359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:53.993375 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:06:53.993391 kernel: cpuidle: using governor menu
Sep 13 00:06:53.993407 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:53.993426 kernel: dca service started, version 1.12.1
Sep 13 00:06:53.993443 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:06:53.993459 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:06:53.993475 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:06:53.993492 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:06:53.993508 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:53.993525 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:06:53.993541 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:53.993557 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:53.993576 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:53.993592 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:53.993608 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:06:53.993624 kernel: ACPI: Interpreter enabled
Sep 13 00:06:53.993640 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:06:53.993657 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:06:53.993673 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:06:53.993689 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:06:53.993706 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:06:53.993722 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:53.993961 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:53.994132 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:06:53.994270 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:06:53.994289 kernel: acpiphp: Slot [3] registered
Sep 13 00:06:53.994306 kernel: acpiphp: Slot [4] registered
Sep 13 00:06:53.994322 kernel: acpiphp: Slot [5] registered
Sep 13 00:06:53.994338 kernel: acpiphp: Slot [6] registered
Sep 13 00:06:53.994359 kernel: acpiphp: Slot [7] registered
Sep 13 00:06:53.994375 kernel: acpiphp: Slot [8] registered
Sep 13 00:06:53.994391 kernel: acpiphp: Slot [9] registered
Sep 13 00:06:53.994407 kernel: acpiphp: Slot [10] registered
Sep 13 00:06:53.994423 kernel: acpiphp: Slot [11] registered
Sep 13 00:06:53.994439 kernel: acpiphp: Slot [12] registered
Sep 13 00:06:53.994455 kernel: acpiphp: Slot [13] registered
Sep 13 00:06:53.994471 kernel: acpiphp: Slot [14] registered
Sep 13 00:06:53.994487 kernel: acpiphp: Slot [15] registered
Sep 13 00:06:53.994506 kernel: acpiphp: Slot [16] registered
Sep 13 00:06:53.994523 kernel: acpiphp: Slot [17] registered
Sep 13 00:06:53.994538 kernel: acpiphp: Slot [18] registered
Sep 13 00:06:53.994555 kernel: acpiphp: Slot [19] registered
Sep 13 00:06:53.994571 kernel: acpiphp: Slot [20] registered
Sep 13 00:06:53.994587 kernel: acpiphp: Slot [21] registered
Sep 13 00:06:53.994603 kernel: acpiphp: Slot [22] registered
Sep 13 00:06:53.994619 kernel: acpiphp: Slot [23] registered
Sep 13 00:06:53.994635 kernel: acpiphp: Slot [24] registered
Sep 13 00:06:53.994651 kernel: acpiphp: Slot [25] registered
Sep 13 00:06:53.994670 kernel: acpiphp: Slot [26] registered
Sep 13 00:06:53.994686 kernel: acpiphp: Slot [27] registered
Sep 13 00:06:53.994702 kernel: acpiphp: Slot [28] registered
Sep 13 00:06:53.994718 kernel: acpiphp: Slot [29] registered
Sep 13 00:06:53.994735 kernel: acpiphp: Slot [30] registered
Sep 13 00:06:53.994751 kernel: acpiphp: Slot [31] registered
Sep 13 00:06:53.994767 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:53.994903 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:06:53.995033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:06:53.995172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:53.995297 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:53.995422 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:06:53.995543 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:53.995700 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:06:53.995854 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:06:53.996005 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:06:53.996160 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:06:53.996290 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:06:53.996421 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:06:53.996556 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:06:53.996688 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:06:53.996824 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:06:53.996965 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:06:53.997128 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:06:53.997266 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:06:53.997396 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:06:53.997524 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:06:53.997652 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:06:53.997786 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:06:53.997921 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:06:54.000493 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:06:54.000704 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:06:54.000728 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:06:54.000745 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:06:54.000762 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:06:54.000778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:06:54.000801 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:06:54.000817 kernel: iommu: Default domain type: Translated
Sep 13 00:06:54.000834 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:06:54.000850 kernel: efivars: Registered efivars operations
Sep 13 00:06:54.000866 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:06:54.000883 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:06:54.000899 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:06:54.000915 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:06:54.002180 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:06:54.002379 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:06:54.002523 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:06:54.002544 kernel: vgaarb: loaded
Sep 13 00:06:54.002561 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:06:54.002578 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:06:54.002594 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:06:54.002610 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:54.002627 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:54.002648 kernel: pnp: PnP ACPI init
Sep 13 00:06:54.002665 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:06:54.002682 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:06:54.002698 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:54.002715 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:54.002731 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:06:54.002747 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:54.002763 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:06:54.002779 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:06:54.002799 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:06:54.002815 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:54.002832 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:54.002848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:54.002864 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:06:54.003009 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:06:54.004246 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:06:54.004757 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:54.004899 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:54.005025 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:06:54.005188 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:06:54.005210 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:54.005227 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:06:54.005243 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 13 00:06:54.005259 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:06:54.005274 kernel: Initialise system trusted keyrings
Sep 13 00:06:54.005292 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:06:54.005313 kernel: Key type asymmetric registered
Sep 13 00:06:54.005329 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:54.005345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:06:54.005361 kernel: io scheduler mq-deadline registered
Sep 13 00:06:54.005376 kernel: io scheduler kyber registered
Sep 13 00:06:54.005392 kernel: io scheduler bfq registered
Sep 13 00:06:54.005407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:06:54.008093 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:54.008121 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:06:54.008144 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:06:54.008161 kernel: i8042: Warning: Keylock active
Sep 13 00:06:54.008178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:06:54.008194 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:06:54.008387 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:06:54.008517 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:06:54.008641 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:06:53 UTC (1757722013)
Sep 13 00:06:54.008763 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:06:54.008786 kernel: intel_pstate: CPU model not supported
Sep 13 00:06:54.008803 kernel: efifb: probing for efifb
Sep 13 00:06:54.008820 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 13 00:06:54.008836 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:06:54.008853 kernel: efifb: scrolling: redraw
Sep 13 00:06:54.008869 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:06:54.008886 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:06:54.008902 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:06:54.008918 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:06:54.008938 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:06:54.008955 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:54.008971 kernel: Segment Routing with IPv6
Sep 13 00:06:54.008988 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:54.009004 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:54.009021 kernel: Key type dns_resolver registered
Sep 13 00:06:54.009723 kernel: IPI shorthand broadcast: enabled
Sep 13 00:06:54.009748 kernel: sched_clock: Marking stable (562002597, 177526031)->(840439500, -100910872)
Sep 13 00:06:54.009823 kernel: registered taskstats version 1
Sep 13 00:06:54.009840 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:54.009855 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:06:54.009871 kernel: Key type .fscrypt registered
Sep 13 00:06:54.009885 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:54.009933 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:54.009977 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:54.009998 kernel: ima: No architecture policies found
Sep 13 00:06:54.010014 kernel: clk: Disabling unused clocks
Sep 13 00:06:54.010034 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:06:54.010050 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:06:54.011115 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:06:54.011135 kernel: Run /init as init process
Sep 13 00:06:54.011153 kernel: with arguments:
Sep 13 00:06:54.011170 kernel: /init
Sep 13 00:06:54.011187 kernel: with environment:
Sep 13 00:06:54.011204 kernel: HOME=/
Sep 13 00:06:54.011230 kernel: TERM=linux
Sep 13 00:06:54.011246 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:54.011272 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:54.011291 systemd[1]: Detected virtualization amazon.
Sep 13 00:06:54.011309 systemd[1]: Detected architecture x86-64.
Sep 13 00:06:54.011326 systemd[1]: Running in initrd.
Sep 13 00:06:54.011342 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:54.011360 systemd[1]: Hostname set to .
Sep 13 00:06:54.011380 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:54.011398 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:54.011415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:54.011433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:54.011451 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:06:54.011468 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:54.011486 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:06:54.011507 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:06:54.011527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:06:54.011544 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:06:54.011562 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:54.011580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:54.011600 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:54.011618 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:54.011635 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:54.011652 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:54.011670 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:54.011688 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:54.011705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:06:54.011723 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:06:54.011740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:54.011760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:54.011778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:54.011795 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:54.011812 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:06:54.011829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:54.011847 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:06:54.011865 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:54.011881 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:54.011902 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:54.011919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:54.011937 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:54.011987 systemd-journald[178]: Collecting audit messages is disabled.
Sep 13 00:06:54.012026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:54.012041 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:06:54.014170 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:06:54.014198 systemd-journald[178]: Journal started
Sep 13 00:06:54.014240 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2fa573805cf1ce1061f2309919f987) is 4.7M, max 38.2M, 33.4M free.
Sep 13 00:06:54.014304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:54.010434 systemd-modules-load[179]: Inserted module 'overlay'
Sep 13 00:06:54.031115 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:54.041127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:54.058093 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:54.063117 kernel: Bridge firewalling registered
Sep 13 00:06:54.062304 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 13 00:06:54.062447 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:54.065734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:54.066676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:54.069887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:54.079559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:54.089332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:54.098444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:54.100000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:54.110387 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:06:54.116174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:54.117103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:54.134292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:54.143560 dracut-cmdline[211]: dracut-dracut-053
Sep 13 00:06:54.147012 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:54.170001 systemd-resolved[215]: Positive Trust Anchors:
Sep 13 00:06:54.170174 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:54.170213 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:54.174988 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 13 00:06:54.177335 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:54.177770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:54.228106 kernel: SCSI subsystem initialized
Sep 13 00:06:54.238091 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:54.250089 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:54.272360 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:54.272433 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:54.318295 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:54.327407 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:06:54.354863 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:54.354941 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:06:54.354963 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:06:54.400113 kernel: raid6: avx512x4 gen() 17479 MB/s
Sep 13 00:06:54.418104 kernel: raid6: avx512x2 gen() 17397 MB/s
Sep 13 00:06:54.436109 kernel: raid6: avx512x1 gen() 17452 MB/s
Sep 13 00:06:54.454098 kernel: raid6: avx2x4 gen() 17266 MB/s
Sep 13 00:06:54.472109 kernel: raid6: avx2x2 gen() 17400 MB/s
Sep 13 00:06:54.491293 kernel: raid6: avx2x1 gen() 13334 MB/s
Sep 13 00:06:54.491356 kernel: raid6: using algorithm avx512x4 gen() 17479 MB/s
Sep 13 00:06:54.511358 kernel: raid6: .... xor() 7606 MB/s, rmw enabled
Sep 13 00:06:54.511436 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:06:54.536096 kernel: xor: automatically using best checksumming function avx
Sep 13 00:06:54.701091 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:06:54.711946 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:06:54.717376 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:54.738806 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Sep 13 00:06:54.743920 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:54.750234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:06:54.767335 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Sep 13 00:06:54.805945 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:54.814294 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:54.865403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:54.874955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:06:54.907387 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:54.909861 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:54.912047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:54.913300 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:54.921360 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:06:54.941307 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:54.979123 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:06:55.001881 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:06:55.002222 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:06:55.007921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:55.011122 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:06:55.011161 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:06:55.009962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:55.013262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:55.013824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:55.014727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:55.019221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:55.026775 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 13 00:06:55.030330 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:06:55.031009 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:06:55.035087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:55.046217 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:57:eb:ee:8b:ff
Sep 13 00:06:55.051126 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:06:55.053241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:55.053374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:55.071420 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:55.071490 kernel: GPT:9289727 != 16777215
Sep 13 00:06:55.071513 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:55.071535 kernel: GPT:9289727 != 16777215
Sep 13 00:06:55.071554 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:55.071575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:55.073524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:55.076218 (udev-worker)[458]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:06:55.102725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:55.107364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:55.136863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:55.157107 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Sep 13 00:06:55.170386 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Sep 13 00:06:55.199522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 13 00:06:55.219720 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 13 00:06:55.232243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 13 00:06:55.242341 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 13 00:06:55.242870 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 13 00:06:55.249321 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:06:55.257394 disk-uuid[633]: Primary Header is updated.
Sep 13 00:06:55.257394 disk-uuid[633]: Secondary Entries is updated.
Sep 13 00:06:55.257394 disk-uuid[633]: Secondary Header is updated.
Sep 13 00:06:55.263128 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:55.267088 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:55.272114 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:56.281872 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:06:56.281946 disk-uuid[634]: The operation has completed successfully.
Sep 13 00:06:56.443662 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:56.443803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:06:56.455352 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:06:56.460685 sh[975]: Success
Sep 13 00:06:56.477347 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:06:56.585483 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:06:56.593198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:06:56.598695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:06:56.637295 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:06:56.637361 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:56.637375 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:06:56.639739 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:06:56.641372 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:06:56.660125 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:06:56.665425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:06:56.666551 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:06:56.671343 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:06:56.674220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:06:56.703971 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:56.704031 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:56.706720 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:06:56.715091 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:06:56.726876 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:06:56.729455 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:56.738838 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:06:56.748173 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:06:56.792895 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:56.799357 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:06:56.834700 systemd-networkd[1167]: lo: Link UP
Sep 13 00:06:56.834716 systemd-networkd[1167]: lo: Gained carrier
Sep 13 00:06:56.837890 systemd-networkd[1167]: Enumeration completed
Sep 13 00:06:56.842308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:06:56.843545 systemd[1]: Reached target network.target - Network.
Sep 13 00:06:56.850625 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:56.850630 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:56.858667 systemd-networkd[1167]: eth0: Link UP
Sep 13 00:06:56.858673 systemd-networkd[1167]: eth0: Gained carrier
Sep 13 00:06:56.858689 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:56.880172 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.16.22/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:06:56.894109 ignition[1114]: Ignition 2.19.0
Sep 13 00:06:56.894131 ignition[1114]: Stage: fetch-offline
Sep 13 00:06:56.894408 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:56.894422 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:56.894931 ignition[1114]: Ignition finished successfully
Sep 13 00:06:56.897934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:56.903394 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:06:56.920976 ignition[1177]: Ignition 2.19.0
Sep 13 00:06:56.920994 ignition[1177]: Stage: fetch
Sep 13 00:06:56.921458 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:56.921473 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:56.921598 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:56.939665 ignition[1177]: PUT result: OK
Sep 13 00:06:56.942868 ignition[1177]: parsed url from cmdline: ""
Sep 13 00:06:56.942885 ignition[1177]: no config URL provided
Sep 13 00:06:56.942893 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:56.942906 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:56.942925 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:56.943742 ignition[1177]: PUT result: OK
Sep 13 00:06:56.943796 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:06:56.944881 ignition[1177]: GET result: OK
Sep 13 00:06:56.945035 ignition[1177]: parsing config with SHA512: 9158988b95d312dc20939df1ad89ca6ef4f53410eafae391eb5b1f4bb3a22cdeac9d73550a5cecea3a47281607afddd36722ec265089f7bc4efc52abaddb57b5
Sep 13 00:06:56.949980 unknown[1177]: fetched base config from "system"
Sep 13 00:06:56.949993 unknown[1177]: fetched base config from "system"
Sep 13 00:06:56.950373 ignition[1177]: fetch: fetch complete
Sep 13 00:06:56.949999 unknown[1177]: fetched user config from "aws"
Sep 13 00:06:56.950379 ignition[1177]: fetch: fetch passed
Sep 13 00:06:56.951974 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:06:56.950431 ignition[1177]: Ignition finished successfully
Sep 13 00:06:56.962329 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:06:56.976196 ignition[1184]: Ignition 2.19.0
Sep 13 00:06:56.976209 ignition[1184]: Stage: kargs
Sep 13 00:06:56.976535 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:56.976545 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:56.976627 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:56.977526 ignition[1184]: PUT result: OK
Sep 13 00:06:56.980339 ignition[1184]: kargs: kargs passed
Sep 13 00:06:56.980409 ignition[1184]: Ignition finished successfully
Sep 13 00:06:56.982405 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:56.991471 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:06:57.004461 ignition[1191]: Ignition 2.19.0
Sep 13 00:06:57.004474 ignition[1191]: Stage: disks
Sep 13 00:06:57.004837 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:57.004848 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:57.004933 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:57.005744 ignition[1191]: PUT result: OK
Sep 13 00:06:57.008554 ignition[1191]: disks: disks passed
Sep 13 00:06:57.008622 ignition[1191]: Ignition finished successfully
Sep 13 00:06:57.010361 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:06:57.011127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:57.011637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:06:57.012249 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:06:57.012822 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:57.013430 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:57.019390 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:06:57.057550 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:06:57.060411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:06:57.066243 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:06:57.174087 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:06:57.174347 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:06:57.175384 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:06:57.190227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:57.193763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:06:57.194986 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:06:57.195047 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:06:57.195106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:57.210313 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:06:57.214254 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:06:57.218107 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1218)
Sep 13 00:06:57.223312 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:57.223386 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:57.223401 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:06:57.241094 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:06:57.242858 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:57.318870 initrd-setup-root[1245]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:06:57.325359 initrd-setup-root[1252]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:06:57.330937 initrd-setup-root[1259]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:06:57.336577 initrd-setup-root[1266]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:06:57.452813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:06:57.457201 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:06:57.461269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:57.474110 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:57.504942 ignition[1334]: INFO : Ignition 2.19.0
Sep 13 00:06:57.504942 ignition[1334]: INFO : Stage: mount
Sep 13 00:06:57.506797 ignition[1334]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:57.506797 ignition[1334]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:57.506797 ignition[1334]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:57.508085 ignition[1334]: INFO : PUT result: OK
Sep 13 00:06:57.510189 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:06:57.511803 ignition[1334]: INFO : mount: mount passed
Sep 13 00:06:57.511803 ignition[1334]: INFO : Ignition finished successfully
Sep 13 00:06:57.512994 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:06:57.516215 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:06:57.632246 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:06:57.637304 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:57.659108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1346)
Sep 13 00:06:57.662665 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:57.664379 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:57.664403 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:06:57.672118 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:06:57.674441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:57.702660 ignition[1362]: INFO : Ignition 2.19.0
Sep 13 00:06:57.702660 ignition[1362]: INFO : Stage: files
Sep 13 00:06:57.704359 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:57.704359 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:06:57.704359 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:06:57.704359 ignition[1362]: INFO : PUT result: OK
Sep 13 00:06:57.707398 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:06:57.708367 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:06:57.708367 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:06:57.713363 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:06:57.714189 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:06:57.714189 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:06:57.713898 unknown[1362]: wrote ssh authorized keys file for user: core
Sep 13 00:06:57.716577 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:06:57.716577 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:06:57.775637 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:06:58.340255 systemd-networkd[1167]: eth0: Gained IPv6LL
Sep 13 00:06:58.650323 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:06:58.650323 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:06:58.652146 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:06:59.030767 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:07:00.273046 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:07:00.273046 ignition[1362]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:07:00.274988 ignition[1362]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:07:00.276168 ignition[1362]: INFO : files: files passed
Sep 13 00:07:00.276168 ignition[1362]: INFO : Ignition finished successfully
Sep 13 00:07:00.276803 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:07:00.282830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:07:00.285210 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:07:00.287535 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:07:00.287661 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:07:00.308615 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:07:00.308615 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:07:00.311425 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:07:00.312739 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:07:00.313636 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:07:00.318310 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:07:00.348636 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:07:00.348758 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:07:00.350181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:07:00.350791 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:07:00.351662 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:07:00.353218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:07:00.379657 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:07:00.391466 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:07:00.400760 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:07:00.401439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:07:00.402286 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:07:00.403074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:07:00.403327 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:07:00.404280 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:07:00.405079 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:07:00.405758 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:07:00.406451 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:07:00.407102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:07:00.407834 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:07:00.408533 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:07:00.409249 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:07:00.410315 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:07:00.411091 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:07:00.411907 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:07:00.412039 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:07:00.413075 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:07:00.413802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:07:00.414420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:07:00.415143 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:07:00.416283 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:07:00.416416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:07:00.417477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:07:00.417596 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:07:00.418452 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:07:00.418553 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:07:00.429408 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:07:00.429889 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:07:00.430100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:07:00.433386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:07:00.433885 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:07:00.434086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:07:00.434560 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:07:00.434655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:07:00.446171 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:07:00.446283 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:07:00.452822 ignition[1415]: INFO : Ignition 2.19.0
Sep 13 00:07:00.452822 ignition[1415]: INFO : Stage: umount
Sep 13 00:07:00.452822 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:07:00.452822 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:07:00.452822 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:07:00.455866 ignition[1415]: INFO : PUT result: OK
Sep 13 00:07:00.458349 ignition[1415]: INFO : umount: umount passed
Sep 13 00:07:00.458349 ignition[1415]: INFO : Ignition finished successfully
Sep 13 00:07:00.460833 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:07:00.461640 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:07:00.462331 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:07:00.462377 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:07:00.463764 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:07:00.463813 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:07:00.464258 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:07:00.464611 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:07:00.465209 systemd[1]: Stopped target network.target - Network.
Sep 13 00:07:00.465629 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:07:00.465677 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:07:00.465980 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:07:00.466262 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:07:00.473199 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:07:00.473675 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:07:00.474760 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:07:00.475592 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:07:00.475656 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:07:00.476266 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:07:00.476326 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:07:00.476910 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:07:00.476993 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:07:00.477621 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:07:00.477684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:07:00.478486 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:07:00.479286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:07:00.481519 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:07:00.482368 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:07:00.482498 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:07:00.483232 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Sep 13 00:07:00.483694 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:07:00.483819 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:07:00.487590 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:07:00.487770 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:07:00.490884 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:07:00.490951 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:07:00.491871 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:07:00.491943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:07:00.497209 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:07:00.497825 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:07:00.497920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:07:00.500236 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:07:00.500311 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:07:00.502013 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:07:00.502188 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:07:00.502742 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:07:00.502800 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:07:00.503682 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:07:00.517451 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:07:00.518090 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:07:00.519687 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:07:00.519826 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:07:00.521976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:07:00.522070 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:07:00.522847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:07:00.522884 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:07:00.523716 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:07:00.523764 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:07:00.524881 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:07:00.524950 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:07:00.526165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:07:00.526230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:07:00.535366 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:07:00.535884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:07:00.535959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:07:00.538030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:07:00.538107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:07:00.542946 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:07:00.543069 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:07:00.545031 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:07:00.554340 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:07:00.563419 systemd[1]: Switching root.
Sep 13 00:07:00.589250 systemd-journald[178]: Journal stopped
Sep 13 00:07:02.505473 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:07:02.505568 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:07:02.505595 kernel: SELinux: policy capability open_perms=1
Sep 13 00:07:02.505623 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:07:02.505643 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:07:02.505670 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:07:02.505690 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:07:02.505706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:07:02.505723 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:07:02.505748 kernel: audit: type=1403 audit(1757722021.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:07:02.505775 systemd[1]: Successfully loaded SELinux policy in 83.804ms.
Sep 13 00:07:02.505803 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.580ms.
Sep 13 00:07:02.505830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:07:02.505851 systemd[1]: Detected virtualization amazon.
Sep 13 00:07:02.505873 systemd[1]: Detected architecture x86-64.
Sep 13 00:07:02.505894 systemd[1]: Detected first boot.
Sep 13 00:07:02.505916 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:07:02.505938 zram_generator::config[1457]: No configuration found.
Sep 13 00:07:02.505961 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:07:02.505981 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:07:02.506011 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:07:02.506032 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:07:02.506095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:07:02.506119 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:07:02.506140 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:07:02.506162 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:07:02.506184 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:07:02.506207 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:07:02.506229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:07:02.506254 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:07:02.506277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:07:02.507623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:07:02.507656 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:07:02.507677 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:07:02.507695 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:07:02.507717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:07:02.507736 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:07:02.507756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:07:02.507785 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:07:02.507805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:07:02.507823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:07:02.507841 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:07:02.507859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:07:02.507880 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:07:02.507900 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:07:02.507919 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:07:02.507941 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:07:02.507960 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:07:02.507978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:07:02.507997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:07:02.508016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:07:02.508034 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:07:02.508073 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:07:02.508110 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:07:02.508139 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:07:02.508164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:02.508184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:07:02.508205 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:07:02.508224 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:07:02.508245 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:07:02.508266 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:07:02.508287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:07:02.508309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:07:02.508336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:07:02.508357 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:07:02.508376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:07:02.508395 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:07:02.508414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:07:02.508434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:07:02.508455 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:07:02.508477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:07:02.508501 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:07:02.508523 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:07:02.508543 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:07:02.508564 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:07:02.508585 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:07:02.508606 kernel: fuse: init (API version 7.39)
Sep 13 00:07:02.508628 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:07:02.508650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:07:02.508671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:07:02.508697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:07:02.508719 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:07:02.508739 systemd[1]: Stopped verity-setup.service.
Sep 13 00:07:02.508758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:02.508776 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:07:02.508792 kernel: loop: module loaded
Sep 13 00:07:02.508809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:07:02.508827 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:07:02.508846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:07:02.508913 systemd-journald[1549]: Collecting audit messages is disabled.
Sep 13 00:07:02.508952 kernel: ACPI: bus type drm_connector registered
Sep 13 00:07:02.508973 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:07:02.508997 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:07:02.509020 systemd-journald[1549]: Journal started
Sep 13 00:07:02.509127 systemd-journald[1549]: Runtime Journal (/run/log/journal/ec2fa573805cf1ce1061f2309919f987) is 4.7M, max 38.2M, 33.4M free.
Sep 13 00:07:02.080430 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:07:02.102736 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 13 00:07:02.103481 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:07:02.511118 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:07:02.515158 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:07:02.516444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:07:02.517782 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:07:02.518213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:07:02.519438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:07:02.519743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:07:02.521160 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:07:02.521392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:07:02.522523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:07:02.522731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:07:02.524167 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:07:02.524377 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:07:02.525766 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:07:02.526250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:07:02.527445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:07:02.528767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:07:02.529984 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:07:02.549562 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:07:02.558198 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:07:02.567802 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:07:02.570581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:07:02.570641 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:07:02.575320 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:07:02.583257 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:07:02.587983 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:07:02.588970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:07:02.600294 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:07:02.604477 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:07:02.605283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:07:02.608880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:07:02.611260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:07:02.619325 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:07:02.629347 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:07:02.634907 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:07:02.639456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:07:02.642684 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:07:02.645342 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:07:02.646407 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:07:02.674298 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:07:02.704728 systemd-journald[1549]: Time spent on flushing to /var/log/journal/ec2fa573805cf1ce1061f2309919f987 is 72.590ms for 986 entries.
Sep 13 00:07:02.704728 systemd-journald[1549]: System Journal (/var/log/journal/ec2fa573805cf1ce1061f2309919f987) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:07:02.786201 systemd-journald[1549]: Received client request to flush runtime journal.
Sep 13 00:07:02.786284 kernel: loop0: detected capacity change from 0 to 61336
Sep 13 00:07:02.716755 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:07:02.719030 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:07:02.735487 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:07:02.753305 udevadm[1593]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:07:02.782486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:07:02.789567 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:07:02.805303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:07:02.804499 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:07:02.807634 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:07:02.832149 kernel: loop1: detected capacity change from 0 to 224512
Sep 13 00:07:02.847577 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:07:02.860796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:07:02.899593 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 13 00:07:02.899623 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 13 00:07:02.915794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:07:02.957798 kernel: loop2: detected capacity change from 0 to 142488
Sep 13 00:07:03.038083 kernel: loop3: detected capacity change from 0 to 140768
Sep 13 00:07:03.115089 kernel: loop4: detected capacity change from 0 to 61336
Sep 13 00:07:03.147100 kernel: loop5: detected capacity change from 0 to 224512
Sep 13 00:07:03.196114 kernel: loop6: detected capacity change from 0 to 142488
Sep 13 00:07:03.253086 kernel: loop7: detected capacity change from 0 to 140768
Sep 13 00:07:03.288442 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 13 00:07:03.291851 (sd-merge)[1611]: Merged extensions into '/usr'.
Sep 13 00:07:03.302840 systemd[1]: Reloading requested from client PID 1586 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:07:03.302863 systemd[1]: Reloading...
Sep 13 00:07:03.440105 zram_generator::config[1636]: No configuration found.
Sep 13 00:07:03.627151 ldconfig[1581]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:07:03.691218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:03.782930 systemd[1]: Reloading finished in 476 ms.
Sep 13 00:07:03.818960 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:07:03.820202 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:07:03.832480 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:07:03.835338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:07:03.858871 systemd[1]: Reloading requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:07:03.858897 systemd[1]: Reloading...
Sep 13 00:07:03.865080 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:07:03.865608 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:07:03.866990 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:07:03.867474 systemd-tmpfiles[1690]: ACLs are not supported, ignoring.
Sep 13 00:07:03.867573 systemd-tmpfiles[1690]: ACLs are not supported, ignoring.
Sep 13 00:07:03.874999 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:07:03.875020 systemd-tmpfiles[1690]: Skipping /boot
Sep 13 00:07:03.892551 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:07:03.892575 systemd-tmpfiles[1690]: Skipping /boot
Sep 13 00:07:03.996130 zram_generator::config[1723]: No configuration found.
Sep 13 00:07:04.112252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:04.166709 systemd[1]: Reloading finished in 307 ms.
Sep 13 00:07:04.186537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:07:04.189825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:07:04.202417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:07:04.206303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:07:04.209318 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:07:04.219915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:07:04.224717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:07:04.230562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:07:04.248901 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:07:04.254699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.255001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:07:04.264278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:07:04.279438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:07:04.282550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:07:04.283291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:07:04.283484 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.289665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.289983 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:07:04.290250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:07:04.290414 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.299558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.300765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:07:04.312105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:07:04.312922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:07:04.313254 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:07:04.313917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:07:04.315005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:07:04.316914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:07:04.323756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:07:04.324001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:07:04.339490 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:07:04.340626 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:07:04.350466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:07:04.363774 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:07:04.364085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:07:04.365637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:07:04.371600 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:07:04.373167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:07:04.388995 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:07:04.395416 systemd-udevd[1777]: Using default interface naming scheme 'v255'.
Sep 13 00:07:04.399384 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:07:04.402832 augenrules[1803]: No rules
Sep 13 00:07:04.403985 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:07:04.407743 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:07:04.450049 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:07:04.461911 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:07:04.464376 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:07:04.471784 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:07:04.482357 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:07:04.529581 systemd-resolved[1775]: Positive Trust Anchors:
Sep 13 00:07:04.529604 systemd-resolved[1775]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:07:04.529652 systemd-resolved[1775]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:07:04.543164 systemd-resolved[1775]: Defaulting to hostname 'linux'.
Sep 13 00:07:04.546015 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:07:04.546657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:07:04.588050 systemd-networkd[1821]: lo: Link UP
Sep 13 00:07:04.588084 systemd-networkd[1821]: lo: Gained carrier
Sep 13 00:07:04.588919 systemd-networkd[1821]: Enumeration completed
Sep 13 00:07:04.589031 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:07:04.589831 systemd[1]: Reached target network.target - Network.
Sep 13 00:07:04.602225 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:07:04.615247 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:07:04.623164 (udev-worker)[1830]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:07:04.676930 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:07:04.676942 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:07:04.686678 systemd-networkd[1821]: eth0: Link UP
Sep 13 00:07:04.687051 systemd-networkd[1821]: eth0: Gained carrier
Sep 13 00:07:04.688779 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:07:04.696173 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.16.22/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:07:04.712098 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1824)
Sep 13 00:07:04.777088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:07:04.789092 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 13 00:07:04.798428 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:07:04.798744 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 13 00:07:04.801505 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:07:04.810091 kernel: ACPI: button: Sleep Button [SLPF]
Sep 13 00:07:04.928981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:07:04.934969 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:07:04.955786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:07:04.956027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:07:04.965948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 13 00:07:04.973311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:07:04.986414 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:07:04.987686 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:07:04.993312 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:07:05.014619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:07:05.023402 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:07:05.046096 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:07:05.046770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:07:05.052294 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:07:05.058592 lvm[1939]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:07:05.088485 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:07:05.092137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:07:05.092908 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:07:05.093552 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:07:05.094040 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:07:05.094897 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:07:05.095477 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:07:05.095890 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:07:05.096316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:07:05.096363 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:07:05.096747 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:07:05.098786 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:07:05.100752 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:07:05.105317 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:07:05.106458 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:07:05.106995 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:07:05.107549 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:07:05.107990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:07:05.108035 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:07:05.109184 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:07:05.113277 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:07:05.121283 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:07:05.125940 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:07:05.130270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:07:05.131042 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:07:05.143485 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:07:05.147371 systemd[1]: Started ntpd.service - Network Time Service.
Sep 13 00:07:05.153218 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:07:05.157171 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 13 00:07:05.165250 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:07:05.167755 jq[1949]: false
Sep 13 00:07:05.168809 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:07:05.193434 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:07:05.194828 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:07:05.197785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:07:05.202361 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:07:05.206019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:07:05.211651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:07:05.211883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:07:05.212322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:07:05.212518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:07:05.249594 jq[1962]: true
Sep 13 00:07:05.272714 dbus-daemon[1948]: [system] SELinux support is enabled
Sep 13 00:07:05.272952 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:07:05.279565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:07:05.281651 dbus-daemon[1948]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 00:07:05.279615 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:07:05.280280 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:07:05.280307 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:07:05.293054 dbus-daemon[1948]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found loop4
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found loop5
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found loop6
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found loop7
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p1
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p2
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p3
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found usr
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p4
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p6
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p7
Sep 13 00:07:05.300521 extend-filesystems[1950]: Found nvme0n1p9
Sep 13 00:07:05.300521 extend-filesystems[1950]: Checking size of /dev/nvme0n1p9
Sep 13 00:07:05.312164 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 13 00:07:05.334207 jq[1970]: true
Sep 13 00:07:05.337898 (ntainerd)[1972]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:07:05.368393 update_engine[1960]: I20250913 00:07:05.368296  1960 main.cc:92] Flatcar Update Engine starting
Sep 13 00:07:05.373953 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:07:05.374273 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:07:05.381551 update_engine[1960]: I20250913 00:07:05.381314  1960 update_check_scheduler.cc:74] Next update check in 4m42s
Sep 13 00:07:05.388684 extend-filesystems[1950]: Resized partition /dev/nvme0n1p9
Sep 13 00:07:05.389783 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:07:05.407351 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:07:05.416706 extend-filesystems[2000]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:07:05.422868 tar[1974]: linux-amd64/LICENSE
Sep 13 00:07:05.422868 tar[1974]: linux-amd64/helm
Sep 13 00:07:05.423325 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 13 00:07:05.433235 ntpd[1952]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: ----------------------------------------------------
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: ntp-4 is maintained by Network Time Foundation,
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: corporation. Support and training for ntp-4 are
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: available at https://www.nwtime.org/support
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: ----------------------------------------------------
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: proto: precision = 0.093 usec (-23)
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: basedate set to 2025-08-31
Sep 13 00:07:05.440663 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: gps base set to 2025-08-31 (week 2382)
Sep 13 00:07:05.433267 ntpd[1952]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 13 00:07:05.433278 ntpd[1952]: ----------------------------------------------------
Sep 13 00:07:05.433289 ntpd[1952]: ntp-4 is maintained by Network Time Foundation,
Sep 13 00:07:05.433298 ntpd[1952]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 13 00:07:05.433310 ntpd[1952]: corporation. Support and training for ntp-4 are
Sep 13 00:07:05.433320 ntpd[1952]: available at https://www.nwtime.org/support
Sep 13 00:07:05.433330 ntpd[1952]: ----------------------------------------------------
Sep 13 00:07:05.437506 ntpd[1952]: proto: precision = 0.093 usec (-23)
Sep 13 00:07:05.439892 ntpd[1952]: basedate set to 2025-08-31
Sep 13 00:07:05.439916 ntpd[1952]: gps base set to 2025-08-31 (week 2382)
Sep 13 00:07:05.456620 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listen and drop on 0 v6wildcard [::]:123
Sep 13 00:07:05.456620 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 13 00:07:05.456620 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listen normally on 2 lo 127.0.0.1:123
Sep 13 00:07:05.456620 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listen normally on 3 eth0 172.31.16.22:123
Sep 13 00:07:05.453430 ntpd[1952]: Listen and drop on 0 v6wildcard [::]:123
Sep 13 00:07:05.453489 ntpd[1952]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 13 00:07:05.453686 ntpd[1952]: Listen normally on 2 lo 127.0.0.1:123
Sep 13 00:07:05.453721 ntpd[1952]: Listen normally on 3 eth0 172.31.16.22:123
Sep 13 00:07:05.458347 ntpd[1952]: Listen normally on 4 lo [::1]:123
Sep 13 00:07:05.459807 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listen normally on 4 lo [::1]:123
Sep 13 00:07:05.459807 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: bind(21) AF_INET6 fe80::457:ebff:feee:8bff%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:07:05.459807 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: unable to create socket on eth0 (5) for fe80::457:ebff:feee:8bff%2#123
Sep 13 00:07:05.459807 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: failed to init interface for address fe80::457:ebff:feee:8bff%2
Sep 13 00:07:05.459807 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: Listening on routing socket on fd #21 for interface updates
Sep 13 00:07:05.458429 ntpd[1952]: bind(21) AF_INET6 fe80::457:ebff:feee:8bff%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:07:05.458453 ntpd[1952]: unable to create socket on eth0 (5) for fe80::457:ebff:feee:8bff%2#123
Sep 13 00:07:05.458468 ntpd[1952]: failed to init interface for address fe80::457:ebff:feee:8bff%2
Sep 13 00:07:05.458507 ntpd[1952]: Listening on routing socket on fd #21 for interface updates
Sep 13 00:07:05.473725 ntpd[1952]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:07:05.479005 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:07:05.479005 ntpd[1952]: 13 Sep 00:07:05 ntpd[1952]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:07:05.473769 ntpd[1952]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:07:05.495701 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 13 00:07:05.553837 coreos-metadata[1947]: Sep 13 00:07:05.553 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.554 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.555 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.557 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.557 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.557 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.559 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.559 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.559 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.560 INFO Fetch failed with 404: resource not found
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.560 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.560 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.560 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.561 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.561 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.562 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.562 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.563 INFO Fetch successful
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.563 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 13 00:07:05.571224 coreos-metadata[1947]: Sep 13 00:07:05.564 INFO Fetch successful
Sep 13 00:07:05.601411 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 13 00:07:05.605101 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 13 00:07:05.605101 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:07:05.605101 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 13 00:07:05.625200 extend-filesystems[1950]: Resized filesystem in /dev/nvme0n1p9
Sep 13 00:07:05.625867 bash[2019]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:07:05.607712 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:07:05.617429 systemd[1]: Starting sshkeys.service...
Sep 13 00:07:05.619342 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:07:05.619603 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:07:05.620646 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:07:05.622944 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:07:05.659358 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1830)
Sep 13 00:07:05.694254 systemd-logind[1957]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:07:05.694285 systemd-logind[1957]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 13 00:07:05.694310 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:07:05.695512 systemd-logind[1957]: New seat seat0.
Sep 13 00:07:05.697855 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:07:05.712352 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:07:05.714470 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:07:05.776614 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:07:05.902978 coreos-metadata[2038]: Sep 13 00:07:05.902 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 13 00:07:05.905622 coreos-metadata[2038]: Sep 13 00:07:05.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 13 00:07:05.909095 coreos-metadata[2038]: Sep 13 00:07:05.908 INFO Fetch successful
Sep 13 00:07:05.909095 coreos-metadata[2038]: Sep 13 00:07:05.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 13 00:07:05.912439 coreos-metadata[2038]: Sep 13 00:07:05.912 INFO Fetch successful
Sep 13 00:07:05.919521 unknown[2038]: wrote ssh authorized keys file for user: core
Sep 13 00:07:05.919721 locksmithd[1997]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:07:05.943457 dbus-daemon[1948]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 13 00:07:05.947842 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:07:05.950807 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 13 00:07:05.952226 dbus-daemon[1948]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1984 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 13 00:07:05.971524 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:07:05.974914 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 13 00:07:06.005498 update-ssh-keys[2098]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:07:06.006986 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:07:06.017966 systemd[1]: Finished sshkeys.service.
Sep 13 00:07:06.024533 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:07:06.024784 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:07:06.028672 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:07:06.076560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:07:06.087802 polkitd[2103]: Started polkitd version 121
Sep 13 00:07:06.088533 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:07:06.101536 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:07:06.103327 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:07:06.158729 polkitd[2103]: Loading rules from directory /etc/polkit-1/rules.d
Sep 13 00:07:06.163895 polkitd[2103]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 13 00:07:06.167582 polkitd[2103]: Finished loading, compiling and executing 2 rules
Sep 13 00:07:06.175070 dbus-daemon[1948]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 13 00:07:06.175624 systemd[1]: Started polkit.service - Authorization Manager.
Sep 13 00:07:06.177441 polkitd[2103]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:07:06.208311 systemd-resolved[1775]: System hostname changed to 'ip-172-31-16-22'.
Sep 13 00:07:06.208414 systemd-hostnamed[1984]: Hostname set to (transient)
Sep 13 00:07:06.254766 containerd[1972]: time="2025-09-13T00:07:06.254673299Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:07:06.304423 containerd[1972]: time="2025-09-13T00:07:06.304149479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.306540 containerd[1972]: time="2025-09-13T00:07:06.306487419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.306655355Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.306701542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.306892340Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.306917433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.306989211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:07:06.307114 containerd[1972]: time="2025-09-13T00:07:06.307006949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.307582 containerd[1972]: time="2025-09-13T00:07:06.307553942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308132 containerd[1972]: time="2025-09-13T00:07:06.307643859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308132 containerd[1972]: time="2025-09-13T00:07:06.307672086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308132 containerd[1972]: time="2025-09-13T00:07:06.307689648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308132 containerd[1972]: time="2025-09-13T00:07:06.307810536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308132 containerd[1972]: time="2025-09-13T00:07:06.308090914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308528 containerd[1972]: time="2025-09-13T00:07:06.308503505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:07:06.308604 containerd[1972]: time="2025-09-13T00:07:06.308588957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:07:06.308761 containerd[1972]: time="2025-09-13T00:07:06.308745336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:07:06.308880 containerd[1972]: time="2025-09-13T00:07:06.308864939Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:07:06.315556 containerd[1972]: time="2025-09-13T00:07:06.315511298Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:07:06.316214 containerd[1972]: time="2025-09-13T00:07:06.315759051Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:07:06.316214 containerd[1972]: time="2025-09-13T00:07:06.315835867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:07:06.316214 containerd[1972]: time="2025-09-13T00:07:06.315870363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:07:06.316214 containerd[1972]: time="2025-09-13T00:07:06.315892814Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:07:06.316214 containerd[1972]: time="2025-09-13T00:07:06.316102443Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:07:06.316844 containerd[1972]: time="2025-09-13T00:07:06.316844731Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2 Sep 13 00:07:06.317109 containerd[1972]: time="2025-09-13T00:07:06.317089321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317194041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317219648Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317241034Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317260834Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317280457Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317302205Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317325603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317344516Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317362418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317381952Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317408355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317430694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317450165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.317811 containerd[1972]: time="2025-09-13T00:07:06.317471014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317489288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317513762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317533208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317559732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317589764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317610615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317629147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317647528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317666869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317689051Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317719513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317737407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.318370 containerd[1972]: time="2025-09-13T00:07:06.317756024Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.318867898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.318995016Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319015995Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319035629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319051328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319096228Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319117829Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:07:06.319986 containerd[1972]: time="2025-09-13T00:07:06.319133216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:07:06.320361 containerd[1972]: time="2025-09-13T00:07:06.319538077Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:07:06.320361 containerd[1972]: time="2025-09-13T00:07:06.319631043Z" level=info msg="Connect containerd service" Sep 13 00:07:06.320361 containerd[1972]: time="2025-09-13T00:07:06.319690464Z" level=info msg="using legacy CRI server" Sep 13 00:07:06.320361 containerd[1972]: time="2025-09-13T00:07:06.319700291Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:07:06.320361 containerd[1972]: 
time="2025-09-13T00:07:06.319831060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:07:06.325636 containerd[1972]: time="2025-09-13T00:07:06.325311079Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:07:06.325925 containerd[1972]: time="2025-09-13T00:07:06.325887235Z" level=info msg="Start subscribing containerd event" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326013667Z" level=info msg="Start recovering state" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326117874Z" level=info msg="Start event monitor" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326133475Z" level=info msg="Start snapshots syncer" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326145802Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326155675Z" level=info msg="Start streaming server" Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326494427Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:07:06.328207 containerd[1972]: time="2025-09-13T00:07:06.326550958Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:07:06.326709 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 13 00:07:06.329076 containerd[1972]: time="2025-09-13T00:07:06.328722756Z" level=info msg="containerd successfully booted in 0.075246s" Sep 13 00:07:06.433751 ntpd[1952]: bind(24) AF_INET6 fe80::457:ebff:feee:8bff%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:07:06.434189 ntpd[1952]: 13 Sep 00:07:06 ntpd[1952]: bind(24) AF_INET6 fe80::457:ebff:feee:8bff%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:07:06.434189 ntpd[1952]: 13 Sep 00:07:06 ntpd[1952]: unable to create socket on eth0 (6) for fe80::457:ebff:feee:8bff%2#123 Sep 13 00:07:06.434189 ntpd[1952]: 13 Sep 00:07:06 ntpd[1952]: failed to init interface for address fe80::457:ebff:feee:8bff%2 Sep 13 00:07:06.433825 ntpd[1952]: unable to create socket on eth0 (6) for fe80::457:ebff:feee:8bff%2#123 Sep 13 00:07:06.433840 ntpd[1952]: failed to init interface for address fe80::457:ebff:feee:8bff%2 Sep 13 00:07:06.579680 tar[1974]: linux-amd64/README.md Sep 13 00:07:06.595731 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:07:06.596240 systemd-networkd[1821]: eth0: Gained IPv6LL Sep 13 00:07:06.599330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:07:06.601127 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:07:06.610460 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 13 00:07:06.616271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:06.621329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:07:06.662048 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:07:06.672081 amazon-ssm-agent[2171]: Initializing new seelog logger Sep 13 00:07:06.672081 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete Sep 13 00:07:06.672081 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 13 00:07:06.672081 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.672081 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 processing appconfig overrides Sep 13 00:07:06.672504 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.672549 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.672647 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 processing appconfig overrides Sep 13 00:07:06.672957 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.673004 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.673121 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 processing appconfig overrides Sep 13 00:07:06.673400 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO Proxy environment variables: Sep 13 00:07:06.675319 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:07:06.675392 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 13 00:07:06.675509 amazon-ssm-agent[2171]: 2025/09/13 00:07:06 processing appconfig overrides Sep 13 00:07:06.772875 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO https_proxy: Sep 13 00:07:06.870519 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO http_proxy: Sep 13 00:07:06.968774 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO no_proxy: Sep 13 00:07:06.977678 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO Checking if agent identity type OnPrem can be assumed Sep 13 00:07:06.977678 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO Checking if agent identity type EC2 can be assumed Sep 13 00:07:06.977678 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO Agent will take identity from EC2 Sep 13 00:07:06.977678 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:07:06.977678 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] Starting Core Agent Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [Registrar] Starting registrar module Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [EC2Identity] EC2 registration was successful. 
Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [CredentialRefresher] credentialRefresher has started Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [CredentialRefresher] Starting credentials refresher loop Sep 13 00:07:06.977894 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 13 00:07:07.066642 amazon-ssm-agent[2171]: 2025-09-13 00:07:06 INFO [CredentialRefresher] Next credential rotation will be in 30.09166124345 minutes Sep 13 00:07:07.773539 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:07:07.787499 systemd[1]: Started sshd@0-172.31.16.22:22-139.178.89.65:44940.service - OpenSSH per-connection server daemon (139.178.89.65:44940). Sep 13 00:07:07.954973 sshd[2190]: Accepted publickey for core from 139.178.89.65 port 44940 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:07.957522 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:07.969006 systemd-logind[1957]: New session 1 of user core. Sep 13 00:07:07.971515 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:07:07.980421 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:07:07.990991 amazon-ssm-agent[2171]: 2025-09-13 00:07:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 13 00:07:07.999130 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:07:08.008472 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 13 00:07:08.014940 (systemd)[2196]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:07:08.091718 amazon-ssm-agent[2171]: 2025-09-13 00:07:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2194) started Sep 13 00:07:08.144547 systemd[2196]: Queued start job for default target default.target. Sep 13 00:07:08.151557 systemd[2196]: Created slice app.slice - User Application Slice. Sep 13 00:07:08.151595 systemd[2196]: Reached target paths.target - Paths. Sep 13 00:07:08.151610 systemd[2196]: Reached target timers.target - Timers. Sep 13 00:07:08.154025 systemd[2196]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:07:08.174757 systemd[2196]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:07:08.175522 systemd[2196]: Reached target sockets.target - Sockets. Sep 13 00:07:08.175544 systemd[2196]: Reached target basic.target - Basic System. Sep 13 00:07:08.175587 systemd[2196]: Reached target default.target - Main User Target. Sep 13 00:07:08.175617 systemd[2196]: Startup finished in 151ms. Sep 13 00:07:08.175915 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:07:08.185313 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:07:08.194102 amazon-ssm-agent[2171]: 2025-09-13 00:07:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 13 00:07:08.326841 systemd[1]: Started sshd@1-172.31.16.22:22-139.178.89.65:44942.service - OpenSSH per-connection server daemon (139.178.89.65:44942). Sep 13 00:07:08.491976 sshd[2217]: Accepted publickey for core from 139.178.89.65 port 44942 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:08.493927 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:08.498751 systemd-logind[1957]: New session 2 of user core. 
Sep 13 00:07:08.505299 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:07:08.626776 sshd[2217]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:08.629716 systemd[1]: sshd@1-172.31.16.22:22-139.178.89.65:44942.service: Deactivated successfully. Sep 13 00:07:08.631272 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:07:08.632436 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:07:08.633653 systemd-logind[1957]: Removed session 2. Sep 13 00:07:08.669519 systemd[1]: Started sshd@2-172.31.16.22:22-139.178.89.65:44948.service - OpenSSH per-connection server daemon (139.178.89.65:44948). Sep 13 00:07:08.758035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:08.759869 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:07:08.762242 systemd[1]: Startup finished in 698ms (kernel) + 7.386s (initrd) + 7.707s (userspace) = 15.792s. Sep 13 00:07:08.771000 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:08.833203 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 44948 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:08.834562 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:08.839278 systemd-logind[1957]: New session 3 of user core. Sep 13 00:07:08.845278 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:07:08.969923 sshd[2224]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:08.972966 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:07:08.973651 systemd[1]: sshd@2-172.31.16.22:22-139.178.89.65:44948.service: Deactivated successfully. Sep 13 00:07:08.975420 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 13 00:07:08.977311 systemd-logind[1957]: Removed session 3. Sep 13 00:07:09.433718 ntpd[1952]: Listen normally on 7 eth0 [fe80::457:ebff:feee:8bff%2]:123 Sep 13 00:07:09.434131 ntpd[1952]: 13 Sep 00:07:09 ntpd[1952]: Listen normally on 7 eth0 [fe80::457:ebff:feee:8bff%2]:123 Sep 13 00:07:09.922285 kubelet[2231]: E0913 00:07:09.922138 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:09.924907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:09.925077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:09.925343 systemd[1]: kubelet.service: Consumed 1.075s CPU time. Sep 13 00:07:13.063250 systemd-resolved[1775]: Clock change detected. Flushing caches. Sep 13 00:07:19.630363 systemd[1]: Started sshd@3-172.31.16.22:22-139.178.89.65:58316.service - OpenSSH per-connection server daemon (139.178.89.65:58316). Sep 13 00:07:19.787671 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 58316 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:19.789226 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:19.793495 systemd-logind[1957]: New session 4 of user core. Sep 13 00:07:19.796671 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:07:19.920605 sshd[2247]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:19.923930 systemd[1]: sshd@3-172.31.16.22:22-139.178.89.65:58316.service: Deactivated successfully. Sep 13 00:07:19.925492 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:07:19.926099 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit. 
Sep 13 00:07:19.927005 systemd-logind[1957]: Removed session 4. Sep 13 00:07:19.956316 systemd[1]: Started sshd@4-172.31.16.22:22-139.178.89.65:58062.service - OpenSSH per-connection server daemon (139.178.89.65:58062). Sep 13 00:07:20.122800 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 58062 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:20.125928 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:20.130617 systemd-logind[1957]: New session 5 of user core. Sep 13 00:07:20.139726 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:07:20.256463 sshd[2254]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:20.259588 systemd[1]: sshd@4-172.31.16.22:22-139.178.89.65:58062.service: Deactivated successfully. Sep 13 00:07:20.261311 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:07:20.262462 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:07:20.263466 systemd-logind[1957]: Removed session 5. Sep 13 00:07:20.292828 systemd[1]: Started sshd@5-172.31.16.22:22-139.178.89.65:58076.service - OpenSSH per-connection server daemon (139.178.89.65:58076). Sep 13 00:07:20.448981 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 58076 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:20.450376 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:20.456450 systemd-logind[1957]: New session 6 of user core. Sep 13 00:07:20.466679 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:07:20.563377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:07:20.569673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 00:07:20.585495 sshd[2261]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:20.592334 systemd[1]: sshd@5-172.31.16.22:22-139.178.89.65:58076.service: Deactivated successfully. Sep 13 00:07:20.595740 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:07:20.596696 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:07:20.597970 systemd-logind[1957]: Removed session 6. Sep 13 00:07:20.619613 systemd[1]: Started sshd@6-172.31.16.22:22-139.178.89.65:58082.service - OpenSSH per-connection server daemon (139.178.89.65:58082). Sep 13 00:07:20.781664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:20.787235 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 58082 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:20.788874 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:20.791813 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:20.796418 systemd-logind[1957]: New session 7 of user core. Sep 13 00:07:20.800817 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:07:20.854139 kubelet[2278]: E0913 00:07:20.853257 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:20.857655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:20.857853 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:07:20.912902 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:07:20.913194 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:20.930207 sudo[2286]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:20.952972 sshd[2271]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:20.956068 systemd[1]: sshd@6-172.31.16.22:22-139.178.89.65:58082.service: Deactivated successfully. Sep 13 00:07:20.958102 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:07:20.959978 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:07:20.961370 systemd-logind[1957]: Removed session 7. Sep 13 00:07:20.992050 systemd[1]: Started sshd@7-172.31.16.22:22-139.178.89.65:58090.service - OpenSSH per-connection server daemon (139.178.89.65:58090). Sep 13 00:07:21.158101 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 58090 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:07:21.159521 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:21.164828 systemd-logind[1957]: New session 8 of user core. Sep 13 00:07:21.174672 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:07:21.271193 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:07:21.271535 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:07:21.275472 sudo[2295]: pam_unix(sudo:session): session closed for user root
Sep 13 00:07:21.280987 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:07:21.281272 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:07:21.295784 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:07:21.299077 auditctl[2298]: No rules
Sep 13 00:07:21.299519 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:07:21.299757 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:07:21.302808 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:07:21.341196 augenrules[2316]: No rules
Sep 13 00:07:21.342697 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:07:21.344203 sudo[2294]: pam_unix(sudo:session): session closed for user root
Sep 13 00:07:21.367096 sshd[2291]: pam_unix(sshd:session): session closed for user core
Sep 13 00:07:21.370731 systemd[1]: sshd@7-172.31.16.22:22-139.178.89.65:58090.service: Deactivated successfully.
Sep 13 00:07:21.372882 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:07:21.374341 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:07:21.375706 systemd-logind[1957]: Removed session 8.
Sep 13 00:07:21.400024 systemd[1]: Started sshd@8-172.31.16.22:22-139.178.89.65:58104.service - OpenSSH per-connection server daemon (139.178.89.65:58104).
Sep 13 00:07:21.563458 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 58104 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:07:21.564928 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:07:21.569159 systemd-logind[1957]: New session 9 of user core.
Sep 13 00:07:21.578629 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:07:21.676146 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:07:21.676480 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:07:22.049036 (dockerd)[2343]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:07:22.049362 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:07:22.424679 dockerd[2343]: time="2025-09-13T00:07:22.424611475Z" level=info msg="Starting up"
Sep 13 00:07:22.542169 dockerd[2343]: time="2025-09-13T00:07:22.542126563Z" level=info msg="Loading containers: start."
Sep 13 00:07:22.666424 kernel: Initializing XFRM netlink socket
Sep 13 00:07:22.697521 (udev-worker)[2364]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:07:22.765004 systemd-networkd[1821]: docker0: Link UP
Sep 13 00:07:22.788604 dockerd[2343]: time="2025-09-13T00:07:22.788554861Z" level=info msg="Loading containers: done."
Sep 13 00:07:22.807603 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2376451848-merged.mount: Deactivated successfully.
Sep 13 00:07:22.812188 dockerd[2343]: time="2025-09-13T00:07:22.812125607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:07:22.812445 dockerd[2343]: time="2025-09-13T00:07:22.812401395Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:07:22.812589 dockerd[2343]: time="2025-09-13T00:07:22.812554030Z" level=info msg="Daemon has completed initialization"
Sep 13 00:07:22.846135 dockerd[2343]: time="2025-09-13T00:07:22.845965468Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:07:22.846246 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:07:23.882672 containerd[1972]: time="2025-09-13T00:07:23.882619222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 13 00:07:24.429484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175477893.mount: Deactivated successfully.
Sep 13 00:07:26.780317 containerd[1972]: time="2025-09-13T00:07:26.780171388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:26.781398 containerd[1972]: time="2025-09-13T00:07:26.781307845Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 13 00:07:26.783417 containerd[1972]: time="2025-09-13T00:07:26.782308808Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:26.785120 containerd[1972]: time="2025-09-13T00:07:26.785082258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:26.786277 containerd[1972]: time="2025-09-13T00:07:26.786243450Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.903579114s"
Sep 13 00:07:26.786374 containerd[1972]: time="2025-09-13T00:07:26.786359496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 13 00:07:26.787366 containerd[1972]: time="2025-09-13T00:07:26.787344098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 13 00:07:29.047446 containerd[1972]: time="2025-09-13T00:07:29.047360827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:29.049449 containerd[1972]: time="2025-09-13T00:07:29.049372457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 13 00:07:29.051877 containerd[1972]: time="2025-09-13T00:07:29.051818014Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:29.057154 containerd[1972]: time="2025-09-13T00:07:29.056088772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:29.057154 containerd[1972]: time="2025-09-13T00:07:29.057033091Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.269562539s"
Sep 13 00:07:29.057154 containerd[1972]: time="2025-09-13T00:07:29.057063733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 13 00:07:29.058099 containerd[1972]: time="2025-09-13T00:07:29.058069396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 13 00:07:30.919795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:07:30.928525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:31.222659 containerd[1972]: time="2025-09-13T00:07:31.222496263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:31.251761 containerd[1972]: time="2025-09-13T00:07:31.251700543Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 13 00:07:31.255464 containerd[1972]: time="2025-09-13T00:07:31.253433761Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:31.257972 containerd[1972]: time="2025-09-13T00:07:31.257684584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:31.258838 containerd[1972]: time="2025-09-13T00:07:31.258681574Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.200580931s"
Sep 13 00:07:31.258838 containerd[1972]: time="2025-09-13T00:07:31.258736107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 13 00:07:31.259460 containerd[1972]: time="2025-09-13T00:07:31.259302309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 13 00:07:31.266682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:31.271941 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:07:31.314974 kubelet[2555]: E0913 00:07:31.314909 2555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:07:31.317425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:07:31.317582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:07:32.329302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505569777.mount: Deactivated successfully.
Sep 13 00:07:33.194833 containerd[1972]: time="2025-09-13T00:07:33.194768065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:33.195907 containerd[1972]: time="2025-09-13T00:07:33.195678252Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 13 00:07:33.198522 containerd[1972]: time="2025-09-13T00:07:33.197131595Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:33.199624 containerd[1972]: time="2025-09-13T00:07:33.199330470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:33.200098 containerd[1972]: time="2025-09-13T00:07:33.200067852Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.940735757s"
Sep 13 00:07:33.200278 containerd[1972]: time="2025-09-13T00:07:33.200103354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 13 00:07:33.200828 containerd[1972]: time="2025-09-13T00:07:33.200776204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:07:33.670074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777306754.mount: Deactivated successfully.
Sep 13 00:07:34.717643 containerd[1972]: time="2025-09-13T00:07:34.717567819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:34.719175 containerd[1972]: time="2025-09-13T00:07:34.719118981Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 13 00:07:34.721351 containerd[1972]: time="2025-09-13T00:07:34.721277076Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:34.726454 containerd[1972]: time="2025-09-13T00:07:34.725251035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:34.726454 containerd[1972]: time="2025-09-13T00:07:34.726285279Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.525467839s"
Sep 13 00:07:34.726454 containerd[1972]: time="2025-09-13T00:07:34.726316834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:07:34.726996 containerd[1972]: time="2025-09-13T00:07:34.726967808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:07:35.213785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827906645.mount: Deactivated successfully.
Sep 13 00:07:35.226094 containerd[1972]: time="2025-09-13T00:07:35.226015629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:35.228140 containerd[1972]: time="2025-09-13T00:07:35.227886625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 00:07:35.231373 containerd[1972]: time="2025-09-13T00:07:35.230084743Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:35.234363 containerd[1972]: time="2025-09-13T00:07:35.233550738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:35.234363 containerd[1972]: time="2025-09-13T00:07:35.234201867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.121301ms"
Sep 13 00:07:35.234363 containerd[1972]: time="2025-09-13T00:07:35.234232461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:07:35.234992 containerd[1972]: time="2025-09-13T00:07:35.234948754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 13 00:07:35.789858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200164495.mount: Deactivated successfully.
Sep 13 00:07:36.857635 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 13 00:07:38.792595 containerd[1972]: time="2025-09-13T00:07:38.792524359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:38.793791 containerd[1972]: time="2025-09-13T00:07:38.793661768Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 13 00:07:38.794940 containerd[1972]: time="2025-09-13T00:07:38.794521201Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:38.797662 containerd[1972]: time="2025-09-13T00:07:38.797626227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:38.798691 containerd[1972]: time="2025-09-13T00:07:38.798658426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.563566585s"
Sep 13 00:07:38.798781 containerd[1972]: time="2025-09-13T00:07:38.798766857Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 13 00:07:41.419673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 13 00:07:41.428781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:41.735602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:41.746827 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:07:41.829431 kubelet[2712]: E0913 00:07:41.829329 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:07:41.833229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:07:41.833456 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:07:42.289436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:42.295767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:42.333456 systemd[1]: Reloading requested from client PID 2726 ('systemctl') (unit session-9.scope)...
Sep 13 00:07:42.333483 systemd[1]: Reloading...
Sep 13 00:07:42.452417 zram_generator::config[2767]: No configuration found.
Sep 13 00:07:42.592721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:07:42.680087 systemd[1]: Reloading finished in 345 ms.
Sep 13 00:07:42.727621 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:07:42.727728 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:07:42.728124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:42.734865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:07:42.937341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:07:42.950849 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:07:43.001625 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:43.001625 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:07:43.001625 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:07:43.002000 kubelet[2828]: I0913 00:07:43.001662 2828 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:07:43.416481 kubelet[2828]: I0913 00:07:43.415595 2828 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:07:43.416481 kubelet[2828]: I0913 00:07:43.415630 2828 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:07:43.416481 kubelet[2828]: I0913 00:07:43.416162 2828 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:07:43.465650 kubelet[2828]: I0913 00:07:43.465591 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:43.467323 kubelet[2828]: E0913 00:07:43.466891 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:43.490102 kubelet[2828]: E0913 00:07:43.490044 2828 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:07:43.490102 kubelet[2828]: I0913 00:07:43.490104 2828 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:07:43.496179 kubelet[2828]: I0913 00:07:43.496134 2828 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:07:43.498783 kubelet[2828]: I0913 00:07:43.498707 2828 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:07:43.498940 kubelet[2828]: I0913 00:07:43.498763 2828 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:07:43.502361 kubelet[2828]: I0913 00:07:43.502309 2828 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:07:43.502361 kubelet[2828]: I0913 00:07:43.502356 2828 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:07:43.504336 kubelet[2828]: I0913 00:07:43.504302 2828 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:43.511618 kubelet[2828]: I0913 00:07:43.511355 2828 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:07:43.511618 kubelet[2828]: I0913 00:07:43.511407 2828 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:07:43.511618 kubelet[2828]: I0913 00:07:43.511430 2828 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:07:43.511618 kubelet[2828]: I0913 00:07:43.511444 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:07:43.518799 kubelet[2828]: W0913 00:07:43.518508 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-22&limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:43.518799 kubelet[2828]: E0913 00:07:43.518566 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-22&limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:43.520265 kubelet[2828]: W0913 00:07:43.519343 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:43.520265 kubelet[2828]: E0913 00:07:43.519401 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:43.522409 kubelet[2828]: I0913 00:07:43.522299 2828 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:07:43.527409 kubelet[2828]: I0913 00:07:43.527264 2828 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:07:43.527409 kubelet[2828]: W0913 00:07:43.527335 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:07:43.530154 kubelet[2828]: I0913 00:07:43.530131 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:07:43.531107 kubelet[2828]: I0913 00:07:43.530271 2828 server.go:1287] "Started kubelet"
Sep 13 00:07:43.536206 kubelet[2828]: I0913 00:07:43.535985 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:07:43.544222 kubelet[2828]: I0913 00:07:43.544167 2828 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:07:43.546423 kubelet[2828]: I0913 00:07:43.545464 2828 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:07:43.548883 kubelet[2828]: I0913 00:07:43.547726 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:07:43.548883 kubelet[2828]: I0913 00:07:43.547999 2828 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:07:43.552315 kubelet[2828]: I0913 00:07:43.550629 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:07:43.552620 kubelet[2828]: E0913 00:07:43.546751 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.22:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-22.1864aee76a96537a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-22,UID:ip-172-31-16-22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-22,},FirstTimestamp:2025-09-13 00:07:43.53025113 +0000 UTC m=+0.575523737,LastTimestamp:2025-09-13 00:07:43.53025113 +0000 UTC m=+0.575523737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-22,}"
Sep 13 00:07:43.552822 kubelet[2828]: I0913 00:07:43.552795 2828 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:07:43.553634 kubelet[2828]: E0913 00:07:43.553613 2828 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-22\" not found"
Sep 13 00:07:43.554650 kubelet[2828]: E0913 00:07:43.554599 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-22?timeout=10s\": dial tcp 172.31.16.22:6443: connect: connection refused" interval="200ms"
Sep 13 00:07:43.559766 kubelet[2828]: I0913 00:07:43.559744 2828 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:07:43.560156 kubelet[2828]: I0913 00:07:43.560141 2828 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:07:43.564135 kubelet[2828]: E0913 00:07:43.563607 2828 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:07:43.564135 kubelet[2828]: I0913 00:07:43.563841 2828 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:07:43.564135 kubelet[2828]: I0913 00:07:43.563855 2828 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:07:43.564135 kubelet[2828]: I0913 00:07:43.563964 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:07:43.573516 kubelet[2828]: I0913 00:07:43.573467 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:07:43.575312 kubelet[2828]: I0913 00:07:43.575281 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:07:43.575694 kubelet[2828]: I0913 00:07:43.575682 2828 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:07:43.576162 kubelet[2828]: I0913 00:07:43.575795 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:07:43.576162 kubelet[2828]: I0913 00:07:43.575808 2828 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:07:43.576162 kubelet[2828]: E0913 00:07:43.575872 2828 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:07:43.583343 kubelet[2828]: W0913 00:07:43.583279 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:43.584687 kubelet[2828]: E0913 00:07:43.583350 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:43.586796 kubelet[2828]: W0913 00:07:43.586743 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:43.586896 kubelet[2828]: E0913 00:07:43.586835 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:43.596708 kubelet[2828]: I0913 00:07:43.596472 2828 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:07:43.596708 kubelet[2828]: I0913 00:07:43.596487 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:07:43.596708 kubelet[2828]: I0913 00:07:43.596504 2828 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:43.602077 kubelet[2828]: I0913 00:07:43.602033 2828 policy_none.go:49] "None policy: Start"
Sep 13 00:07:43.602077 kubelet[2828]: I0913 00:07:43.602072 2828 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:07:43.602077 kubelet[2828]: I0913 00:07:43.602088 2828 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:07:43.611891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:07:43.625154 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:07:43.629357 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:07:43.641302 kubelet[2828]: I0913 00:07:43.640723 2828 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:07:43.641302 kubelet[2828]: I0913 00:07:43.640992 2828 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:07:43.641302 kubelet[2828]: I0913 00:07:43.641008 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:07:43.644005 kubelet[2828]: I0913 00:07:43.643967 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:07:43.645152 kubelet[2828]: E0913 00:07:43.645026 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:07:43.645152 kubelet[2828]: E0913 00:07:43.645075 2828 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-22\" not found"
Sep 13 00:07:43.690275 systemd[1]: Created slice kubepods-burstable-podea5ccb6d983efd501e5e15a3ce82c5e0.slice - libcontainer container kubepods-burstable-podea5ccb6d983efd501e5e15a3ce82c5e0.slice.
Sep 13 00:07:43.695266 kubelet[2828]: E0913 00:07:43.695229 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:43.701131 systemd[1]: Created slice kubepods-burstable-pod1333aefbc5c869185ed842386f619038.slice - libcontainer container kubepods-burstable-pod1333aefbc5c869185ed842386f619038.slice.
Sep 13 00:07:43.705051 kubelet[2828]: E0913 00:07:43.704942 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:43.714149 systemd[1]: Created slice kubepods-burstable-podb7a8bb453b43df2e5ef05139bb852a32.slice - libcontainer container kubepods-burstable-podb7a8bb453b43df2e5ef05139bb852a32.slice.
Sep 13 00:07:43.718570 kubelet[2828]: E0913 00:07:43.718536 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:43.745365 kubelet[2828]: I0913 00:07:43.745158 2828 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:43.745701 kubelet[2828]: E0913 00:07:43.745542 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.22:6443/api/v1/nodes\": dial tcp 172.31.16.22:6443: connect: connection refused" node="ip-172-31-16-22"
Sep 13 00:07:43.755413 kubelet[2828]: E0913 00:07:43.755238 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-22?timeout=10s\": dial tcp 172.31.16.22:6443: connect: connection refused" interval="400ms"
Sep 13 00:07:43.761598 kubelet[2828]: I0913 00:07:43.761540 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:43.761598 kubelet[2828]: I0913 00:07:43.761585 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:43.761598 kubelet[2828]: I0913 00:07:43.761610 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:43.761598 kubelet[2828]: I0913 00:07:43.761631 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:43.761598 kubelet[2828]: I0913 00:07:43.761648 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7a8bb453b43df2e5ef05139bb852a32-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-22\" (UID: \"b7a8bb453b43df2e5ef05139bb852a32\") " pod="kube-system/kube-scheduler-ip-172-31-16-22"
Sep 13 00:07:43.762010 kubelet[2828]: I0913 00:07:43.761665 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:43.762010 kubelet[2828]: I0913 00:07:43.761679 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:43.762010 kubelet[2828]: I0913 00:07:43.761695 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:43.762010 kubelet[2828]: I0913 00:07:43.761710 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-ca-certs\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:43.948294 kubelet[2828]: I0913 00:07:43.948082 2828 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:43.948421 kubelet[2828]: E0913 00:07:43.948406 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.22:6443/api/v1/nodes\": dial tcp 172.31.16.22:6443: connect: connection refused" node="ip-172-31-16-22"
Sep 13 00:07:43.998159 containerd[1972]: time="2025-09-13T00:07:43.998116545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-22,Uid:ea5ccb6d983efd501e5e15a3ce82c5e0,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:44.006415 containerd[1972]: time="2025-09-13T00:07:44.006338694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-22,Uid:1333aefbc5c869185ed842386f619038,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:44.020590 containerd[1972]: time="2025-09-13T00:07:44.020527170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-22,Uid:b7a8bb453b43df2e5ef05139bb852a32,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:44.155808 kubelet[2828]: E0913 00:07:44.155753 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-22?timeout=10s\": dial tcp 172.31.16.22:6443: connect: connection refused" interval="800ms"
Sep 13 00:07:44.350873 kubelet[2828]: I0913 00:07:44.350775 2828 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:44.351332 kubelet[2828]: E0913 00:07:44.351296 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.22:6443/api/v1/nodes\": dial tcp 172.31.16.22:6443: connect: connection refused" node="ip-172-31-16-22"
Sep 13 00:07:44.484287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468046886.mount: Deactivated successfully.
Sep 13 00:07:44.502699 containerd[1972]: time="2025-09-13T00:07:44.502571735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:44.504722 containerd[1972]: time="2025-09-13T00:07:44.504674688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:44.506763 containerd[1972]: time="2025-09-13T00:07:44.506705202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 13 00:07:44.508758 containerd[1972]: time="2025-09-13T00:07:44.508705569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:07:44.510600 containerd[1972]: time="2025-09-13T00:07:44.510549927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:44.512990 containerd[1972]: time="2025-09-13T00:07:44.512944162Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:44.514727 containerd[1972]: time="2025-09-13T00:07:44.514665890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:07:44.517969 containerd[1972]: time="2025-09-13T00:07:44.517903507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:07:44.519899 containerd[1972]: time="2025-09-13T00:07:44.519123032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.651637ms"
Sep 13 00:07:44.529693 containerd[1972]: time="2025-09-13T00:07:44.529639076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.441653ms"
Sep 13 00:07:44.531362 containerd[1972]: time="2025-09-13T00:07:44.530698525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.071399ms"
Sep 13 00:07:44.611879 kubelet[2828]: W0913 00:07:44.611375 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-22&limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:44.612327 kubelet[2828]: E0913 00:07:44.612269 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-22&limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:44.752440 containerd[1972]: time="2025-09-13T00:07:44.752242165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:44.752440 containerd[1972]: time="2025-09-13T00:07:44.752343103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:44.753209 containerd[1972]: time="2025-09-13T00:07:44.752365359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.753209 containerd[1972]: time="2025-09-13T00:07:44.752518159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.757931 containerd[1972]: time="2025-09-13T00:07:44.754606587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:44.757931 containerd[1972]: time="2025-09-13T00:07:44.754683363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:44.757931 containerd[1972]: time="2025-09-13T00:07:44.754715176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.760243 containerd[1972]: time="2025-09-13T00:07:44.757671958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.765040 containerd[1972]: time="2025-09-13T00:07:44.764240679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:44.765040 containerd[1972]: time="2025-09-13T00:07:44.764987657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:44.765769 containerd[1972]: time="2025-09-13T00:07:44.765525964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.766357 containerd[1972]: time="2025-09-13T00:07:44.765728569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:44.808657 systemd[1]: Started cri-containerd-51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d.scope - libcontainer container 51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d.
Sep 13 00:07:44.810351 systemd[1]: Started cri-containerd-7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245.scope - libcontainer container 7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245.
Sep 13 00:07:44.813031 systemd[1]: Started cri-containerd-b814a468ad98c0d3bd88e856083a572267dfb38a0f9f0a8611f74a7cab001c09.scope - libcontainer container b814a468ad98c0d3bd88e856083a572267dfb38a0f9f0a8611f74a7cab001c09.
Sep 13 00:07:44.921808 containerd[1972]: time="2025-09-13T00:07:44.921735124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-22,Uid:ea5ccb6d983efd501e5e15a3ce82c5e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b814a468ad98c0d3bd88e856083a572267dfb38a0f9f0a8611f74a7cab001c09\""
Sep 13 00:07:44.922797 kubelet[2828]: W0913 00:07:44.922715 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:44.924758 kubelet[2828]: E0913 00:07:44.923590 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:44.931557 containerd[1972]: time="2025-09-13T00:07:44.931112939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-22,Uid:b7a8bb453b43df2e5ef05139bb852a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d\""
Sep 13 00:07:44.934415 containerd[1972]: time="2025-09-13T00:07:44.934170368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-22,Uid:1333aefbc5c869185ed842386f619038,Namespace:kube-system,Attempt:0,} returns sandbox id \"7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245\""
Sep 13 00:07:44.935931 containerd[1972]: time="2025-09-13T00:07:44.935891567Z" level=info msg="CreateContainer within sandbox \"b814a468ad98c0d3bd88e856083a572267dfb38a0f9f0a8611f74a7cab001c09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:07:44.937185 containerd[1972]: time="2025-09-13T00:07:44.936935590Z" level=info msg="CreateContainer within sandbox \"51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:07:44.938760 containerd[1972]: time="2025-09-13T00:07:44.938600025Z" level=info msg="CreateContainer within sandbox \"7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:07:44.956361 kubelet[2828]: E0913 00:07:44.956305 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-22?timeout=10s\": dial tcp 172.31.16.22:6443: connect: connection refused" interval="1.6s"
Sep 13 00:07:44.979795 containerd[1972]: time="2025-09-13T00:07:44.979645442Z" level=info msg="CreateContainer within sandbox \"51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3\""
Sep 13 00:07:44.980585 containerd[1972]: time="2025-09-13T00:07:44.980536345Z" level=info msg="StartContainer for \"3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3\""
Sep 13 00:07:44.982111 kubelet[2828]: W0913 00:07:44.982053 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:44.982422 kubelet[2828]: E0913 00:07:44.982352 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:44.990718 containerd[1972]: time="2025-09-13T00:07:44.989914561Z" level=info msg="CreateContainer within sandbox \"b814a468ad98c0d3bd88e856083a572267dfb38a0f9f0a8611f74a7cab001c09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e3bc313680194df327edb8d8a259109b6e04865727b913bd2eca342b16f960d\""
Sep 13 00:07:44.990923 containerd[1972]: time="2025-09-13T00:07:44.990842818Z" level=info msg="CreateContainer within sandbox \"7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11\""
Sep 13 00:07:44.991465 kubelet[2828]: W0913 00:07:44.991337 2828 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.22:6443: connect: connection refused
Sep 13 00:07:44.991465 kubelet[2828]: E0913 00:07:44.991415 2828 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:44.992127 containerd[1972]: time="2025-09-13T00:07:44.991970362Z" level=info msg="StartContainer for \"3e3bc313680194df327edb8d8a259109b6e04865727b913bd2eca342b16f960d\""
Sep 13 00:07:44.993317 containerd[1972]: time="2025-09-13T00:07:44.992048105Z" level=info msg="StartContainer for \"7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11\""
Sep 13 00:07:45.020210 systemd[1]: Started cri-containerd-3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3.scope - libcontainer container 3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3.
Sep 13 00:07:45.043683 systemd[1]: Started cri-containerd-3e3bc313680194df327edb8d8a259109b6e04865727b913bd2eca342b16f960d.scope - libcontainer container 3e3bc313680194df327edb8d8a259109b6e04865727b913bd2eca342b16f960d.
Sep 13 00:07:45.073816 systemd[1]: Started cri-containerd-7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11.scope - libcontainer container 7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11.
Sep 13 00:07:45.151137 containerd[1972]: time="2025-09-13T00:07:45.151087322Z" level=info msg="StartContainer for \"3e3bc313680194df327edb8d8a259109b6e04865727b913bd2eca342b16f960d\" returns successfully"
Sep 13 00:07:45.157258 kubelet[2828]: I0913 00:07:45.157008 2828 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:45.158753 kubelet[2828]: E0913 00:07:45.158338 2828 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.22:6443/api/v1/nodes\": dial tcp 172.31.16.22:6443: connect: connection refused" node="ip-172-31-16-22"
Sep 13 00:07:45.165422 containerd[1972]: time="2025-09-13T00:07:45.163901727Z" level=info msg="StartContainer for \"3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3\" returns successfully"
Sep 13 00:07:45.171415 containerd[1972]: time="2025-09-13T00:07:45.171348951Z" level=info msg="StartContainer for \"7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11\" returns successfully"
Sep 13 00:07:45.607749 kubelet[2828]: E0913 00:07:45.607289 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:45.617805 kubelet[2828]: E0913 00:07:45.617593 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:45.620407 kubelet[2828]: E0913 00:07:45.618180 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:45.653274 kubelet[2828]: E0913 00:07:45.653209 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.22:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:07:46.620191 kubelet[2828]: E0913 00:07:46.620130 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:46.623330 kubelet[2828]: E0913 00:07:46.622327 2828 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:46.761465 kubelet[2828]: I0913 00:07:46.761351 2828 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:48.048086 kubelet[2828]: E0913 00:07:48.048023 2828 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-22\" not found" node="ip-172-31-16-22"
Sep 13 00:07:48.210402 kubelet[2828]: I0913 00:07:48.210212 2828 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-22"
Sep 13 00:07:48.210402 kubelet[2828]: E0913 00:07:48.210253 2828 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-22\": node \"ip-172-31-16-22\" not found"
Sep 13 00:07:48.255353 kubelet[2828]: I0913 00:07:48.255061 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:48.262037 kubelet[2828]: E0913 00:07:48.262003 2828 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:48.262037 kubelet[2828]: I0913 00:07:48.262035 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:48.271575 kubelet[2828]: E0913 00:07:48.271543 2828 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:48.271927 kubelet[2828]: I0913 00:07:48.271733 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-22"
Sep 13 00:07:48.274035 kubelet[2828]: E0913 00:07:48.273996 2828 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-22\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-22"
Sep 13 00:07:48.521696 kubelet[2828]: I0913 00:07:48.521584 2828 apiserver.go:52] "Watching apiserver"
Sep 13 00:07:48.561019 kubelet[2828]: I0913 00:07:48.560973 2828 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:07:50.418140 systemd[1]: Reloading requested from client PID 3106 ('systemctl') (unit session-9.scope)...
Sep 13 00:07:50.418169 systemd[1]: Reloading...
Sep 13 00:07:50.542430 zram_generator::config[3146]: No configuration found.
Sep 13 00:07:50.675130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:50.836490 systemd[1]: Reloading finished in 417 ms. Sep 13 00:07:50.899159 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:50.912252 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:07:50.912554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:50.912627 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 132.4M memory peak, 0B memory swap peak. Sep 13 00:07:50.919225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:51.186872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:51.201823 (kubelet)[3206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:07:51.274417 kubelet[3206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:51.274417 kubelet[3206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:51.274417 kubelet[3206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:07:51.274417 kubelet[3206]: I0913 00:07:51.273478 3206 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:07:51.282446 kubelet[3206]: I0913 00:07:51.282405 3206 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:07:51.282446 kubelet[3206]: I0913 00:07:51.282434 3206 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:07:51.282738 kubelet[3206]: I0913 00:07:51.282721 3206 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:07:51.286144 kubelet[3206]: I0913 00:07:51.286023 3206 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:07:51.288568 kubelet[3206]: I0913 00:07:51.288535 3206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:07:51.303172 kubelet[3206]: E0913 00:07:51.303115 3206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:07:51.303172 kubelet[3206]: I0913 00:07:51.303157 3206 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:07:51.306131 kubelet[3206]: I0913 00:07:51.306089 3206 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:07:51.306396 kubelet[3206]: I0913 00:07:51.306345 3206 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:07:51.306669 kubelet[3206]: I0913 00:07:51.306426 3206 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:07:51.306811 kubelet[3206]: I0913 00:07:51.306683 3206 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:07:51.306811 kubelet[3206]: I0913 00:07:51.306697 3206 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:07:51.311044 kubelet[3206]: I0913 00:07:51.311003 3206 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:51.311278 kubelet[3206]: I0913 00:07:51.311254 3206 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:07:51.312161 kubelet[3206]: I0913 00:07:51.311418 3206 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:07:51.312161 kubelet[3206]: I0913 00:07:51.311454 3206 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:07:51.312161 kubelet[3206]: I0913 00:07:51.311468 3206 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:07:51.315635 kubelet[3206]: I0913 00:07:51.315552 3206 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:07:51.318365 kubelet[3206]: I0913 00:07:51.318322 3206 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:07:51.331414 kubelet[3206]: I0913 00:07:51.331361 3206 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:07:51.333131 kubelet[3206]: I0913 00:07:51.333074 3206 server.go:1287] "Started kubelet"
Sep 13 00:07:51.334429 kubelet[3206]: I0913 00:07:51.334346 3206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:07:51.337462 kubelet[3206]: I0913 00:07:51.337430 3206 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:07:51.339240 kubelet[3206]: I0913 00:07:51.337649 3206 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:07:51.339240 kubelet[3206]: I0913 00:07:51.338157 3206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:07:51.340202 kubelet[3206]: I0913 00:07:51.340180 3206 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:07:51.346973 kubelet[3206]: I0913 00:07:51.346696 3206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:07:51.352731 kubelet[3206]: I0913 00:07:51.352672 3206 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:07:51.353570 kubelet[3206]: I0913 00:07:51.353551 3206 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:07:51.353991 kubelet[3206]: I0913 00:07:51.353976 3206 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:07:51.357781 kubelet[3206]: E0913 00:07:51.357724 3206 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:07:51.361780 kubelet[3206]: I0913 00:07:51.361571 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:07:51.365154 kubelet[3206]: I0913 00:07:51.365106 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:07:51.365336 kubelet[3206]: I0913 00:07:51.365284 3206 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:07:51.365336 kubelet[3206]: I0913 00:07:51.365317 3206 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:07:51.365336 kubelet[3206]: I0913 00:07:51.365326 3206 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:07:51.368076 kubelet[3206]: E0913 00:07:51.367667 3206 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:07:51.370555 kubelet[3206]: I0913 00:07:51.370533 3206 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:07:51.371305 kubelet[3206]: I0913 00:07:51.371289 3206 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:07:51.371516 kubelet[3206]: I0913 00:07:51.371497 3206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:07:51.447176 kubelet[3206]: I0913 00:07:51.447062 3206 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:07:51.447176 kubelet[3206]: I0913 00:07:51.447085 3206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:07:51.447176 kubelet[3206]: I0913 00:07:51.447111 3206 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:07:51.447430 kubelet[3206]: I0913 00:07:51.447314 3206 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:07:51.447430 kubelet[3206]: I0913 00:07:51.447328 3206 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:07:51.447430 kubelet[3206]: I0913 00:07:51.447354 3206 policy_none.go:49] "None policy: Start"
Sep 13 00:07:51.447430 kubelet[3206]: I0913 00:07:51.447366 3206 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:07:51.447430 kubelet[3206]: I0913 00:07:51.447394 3206 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:07:51.448973 kubelet[3206]: I0913 00:07:51.447609 3206 state_mem.go:75] "Updated machine memory state"
Sep 13 00:07:51.457569 kubelet[3206]: I0913 00:07:51.457212 3206 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:07:51.457569 kubelet[3206]: I0913 00:07:51.457530 3206 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:07:51.457569 kubelet[3206]: I0913 00:07:51.457544 3206 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:07:51.458721 kubelet[3206]: I0913 00:07:51.458446 3206 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:07:51.465440 kubelet[3206]: E0913 00:07:51.464432 3206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:07:51.475803 kubelet[3206]: I0913 00:07:51.473223 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-22"
Sep 13 00:07:51.475803 kubelet[3206]: I0913 00:07:51.473657 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:51.475803 kubelet[3206]: I0913 00:07:51.473933 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.554521 kubelet[3206]: I0913 00:07:51.554466 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-ca-certs\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:51.554892 kubelet[3206]: I0913 00:07:51.554801 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.555012 kubelet[3206]: I0913 00:07:51.555000 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.555086 kubelet[3206]: I0913 00:07:51.555077 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.555165 kubelet[3206]: I0913 00:07:51.555154 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.555243 kubelet[3206]: I0913 00:07:51.555234 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7a8bb453b43df2e5ef05139bb852a32-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-22\" (UID: \"b7a8bb453b43df2e5ef05139bb852a32\") " pod="kube-system/kube-scheduler-ip-172-31-16-22"
Sep 13 00:07:51.555311 kubelet[3206]: I0913 00:07:51.555303 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:51.555395 kubelet[3206]: I0913 00:07:51.555375 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea5ccb6d983efd501e5e15a3ce82c5e0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-22\" (UID: \"ea5ccb6d983efd501e5e15a3ce82c5e0\") " pod="kube-system/kube-apiserver-ip-172-31-16-22"
Sep 13 00:07:51.555481 kubelet[3206]: I0913 00:07:51.555472 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1333aefbc5c869185ed842386f619038-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-22\" (UID: \"1333aefbc5c869185ed842386f619038\") " pod="kube-system/kube-controller-manager-ip-172-31-16-22"
Sep 13 00:07:51.579313 kubelet[3206]: I0913 00:07:51.579287 3206 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-22"
Sep 13 00:07:51.590820 kubelet[3206]: I0913 00:07:51.590792 3206 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-22"
Sep 13 00:07:51.591068 kubelet[3206]: I0913 00:07:51.591057 3206 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-22"
Sep 13 00:07:51.600480 update_engine[1960]: I20250913 00:07:51.600356 1960 update_attempter.cc:509] Updating boot flags...
Sep 13 00:07:51.727429 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3259)
Sep 13 00:07:52.001079 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3262)
Sep 13 00:07:52.311517 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3262)
Sep 13 00:07:52.322825 kubelet[3206]: I0913 00:07:52.321705 3206 apiserver.go:52] "Watching apiserver"
Sep 13 00:07:52.355115 kubelet[3206]: I0913 00:07:52.354829 3206 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:07:52.495769 kubelet[3206]: I0913 00:07:52.495594 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-22" podStartSLOduration=1.495568652 podStartE2EDuration="1.495568652s" podCreationTimestamp="2025-09-13 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:52.464758565 +0000 UTC m=+1.256453546" watchObservedRunningTime="2025-09-13 00:07:52.495568652 +0000 UTC m=+1.287263635"
Sep 13 00:07:52.558406 kubelet[3206]: I0913 00:07:52.555057 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-22" podStartSLOduration=1.55503298 podStartE2EDuration="1.55503298s" podCreationTimestamp="2025-09-13 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:52.49889919 +0000 UTC m=+1.290594176" watchObservedRunningTime="2025-09-13 00:07:52.55503298 +0000 UTC m=+1.346727972"
Sep 13 00:07:52.612500 kubelet[3206]: I0913 00:07:52.611253 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-22" podStartSLOduration=1.611230923 podStartE2EDuration="1.611230923s" podCreationTimestamp="2025-09-13 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:52.555517588 +0000 UTC m=+1.347212577" watchObservedRunningTime="2025-09-13 00:07:52.611230923 +0000 UTC m=+1.402925911"
Sep 13 00:07:55.300156 kubelet[3206]: I0913 00:07:55.299944 3206 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:07:55.301153 containerd[1972]: time="2025-09-13T00:07:55.300942925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:07:55.301541 kubelet[3206]: I0913 00:07:55.301278 3206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:07:56.333583 systemd[1]: Created slice kubepods-besteffort-podfa9bbbfa_f4a7_45fe_9d75_0f371f7f6508.slice - libcontainer container kubepods-besteffort-podfa9bbbfa_f4a7_45fe_9d75_0f371f7f6508.slice.
Sep 13 00:07:56.394158 kubelet[3206]: I0913 00:07:56.393940 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508-kube-proxy\") pod \"kube-proxy-2zcwc\" (UID: \"fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508\") " pod="kube-system/kube-proxy-2zcwc"
Sep 13 00:07:56.394158 kubelet[3206]: I0913 00:07:56.393994 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508-xtables-lock\") pod \"kube-proxy-2zcwc\" (UID: \"fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508\") " pod="kube-system/kube-proxy-2zcwc"
Sep 13 00:07:56.394158 kubelet[3206]: I0913 00:07:56.394024 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508-lib-modules\") pod \"kube-proxy-2zcwc\" (UID: \"fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508\") " pod="kube-system/kube-proxy-2zcwc"
Sep 13 00:07:56.394158 kubelet[3206]: I0913 00:07:56.394051 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tkm\" (UniqueName: \"kubernetes.io/projected/fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508-kube-api-access-s4tkm\") pod \"kube-proxy-2zcwc\" (UID: \"fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508\") " pod="kube-system/kube-proxy-2zcwc"
Sep 13 00:07:56.413126 systemd[1]: Created slice kubepods-besteffort-pod188e4440_188b_4195_92cd_905b983082fc.slice - libcontainer container kubepods-besteffort-pod188e4440_188b_4195_92cd_905b983082fc.slice.
Sep 13 00:07:56.494304 kubelet[3206]: I0913 00:07:56.494251 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/188e4440-188b-4195-92cd-905b983082fc-var-lib-calico\") pod \"tigera-operator-755d956888-pc8gv\" (UID: \"188e4440-188b-4195-92cd-905b983082fc\") " pod="tigera-operator/tigera-operator-755d956888-pc8gv"
Sep 13 00:07:56.494471 kubelet[3206]: I0913 00:07:56.494367 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfgk\" (UniqueName: \"kubernetes.io/projected/188e4440-188b-4195-92cd-905b983082fc-kube-api-access-2vfgk\") pod \"tigera-operator-755d956888-pc8gv\" (UID: \"188e4440-188b-4195-92cd-905b983082fc\") " pod="tigera-operator/tigera-operator-755d956888-pc8gv"
Sep 13 00:07:56.645722 containerd[1972]: time="2025-09-13T00:07:56.645237910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zcwc,Uid:fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508,Namespace:kube-system,Attempt:0,}"
Sep 13 00:07:56.683649 containerd[1972]: time="2025-09-13T00:07:56.682772871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:56.683649 containerd[1972]: time="2025-09-13T00:07:56.682823749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:56.683649 containerd[1972]: time="2025-09-13T00:07:56.682834671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:56.683649 containerd[1972]: time="2025-09-13T00:07:56.682907534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:56.711592 systemd[1]: Started cri-containerd-b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7.scope - libcontainer container b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7.
Sep 13 00:07:56.719465 containerd[1972]: time="2025-09-13T00:07:56.719425656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pc8gv,Uid:188e4440-188b-4195-92cd-905b983082fc,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:07:56.739810 containerd[1972]: time="2025-09-13T00:07:56.739746261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zcwc,Uid:fa9bbbfa-f4a7-45fe-9d75-0f371f7f6508,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7\""
Sep 13 00:07:56.751833 containerd[1972]: time="2025-09-13T00:07:56.751698552Z" level=info msg="CreateContainer within sandbox \"b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:07:56.780417 containerd[1972]: time="2025-09-13T00:07:56.779631626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:07:56.780417 containerd[1972]: time="2025-09-13T00:07:56.779729631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:07:56.780417 containerd[1972]: time="2025-09-13T00:07:56.779747932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:56.780417 containerd[1972]: time="2025-09-13T00:07:56.779933300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:07:56.794873 containerd[1972]: time="2025-09-13T00:07:56.794755099Z" level=info msg="CreateContainer within sandbox \"b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4e296b52ab2d229528f9d45b3e3f90c7bf330541608921333b7a1eb515c4332\""
Sep 13 00:07:56.798108 containerd[1972]: time="2025-09-13T00:07:56.797165534Z" level=info msg="StartContainer for \"e4e296b52ab2d229528f9d45b3e3f90c7bf330541608921333b7a1eb515c4332\""
Sep 13 00:07:56.807252 systemd[1]: Started cri-containerd-d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761.scope - libcontainer container d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761.
Sep 13 00:07:56.857654 systemd[1]: Started cri-containerd-e4e296b52ab2d229528f9d45b3e3f90c7bf330541608921333b7a1eb515c4332.scope - libcontainer container e4e296b52ab2d229528f9d45b3e3f90c7bf330541608921333b7a1eb515c4332.
Sep 13 00:07:56.894566 containerd[1972]: time="2025-09-13T00:07:56.894124691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pc8gv,Uid:188e4440-188b-4195-92cd-905b983082fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761\""
Sep 13 00:07:56.902131 containerd[1972]: time="2025-09-13T00:07:56.901835417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:07:56.924222 containerd[1972]: time="2025-09-13T00:07:56.924168000Z" level=info msg="StartContainer for \"e4e296b52ab2d229528f9d45b3e3f90c7bf330541608921333b7a1eb515c4332\" returns successfully"
Sep 13 00:07:57.440980 kubelet[3206]: I0913 00:07:57.440728 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2zcwc" podStartSLOduration=1.440711601 podStartE2EDuration="1.440711601s" podCreationTimestamp="2025-09-13 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:57.440699605 +0000 UTC m=+6.232394594" watchObservedRunningTime="2025-09-13 00:07:57.440711601 +0000 UTC m=+6.232406571"
Sep 13 00:07:57.509532 systemd[1]: run-containerd-runc-k8s.io-b6a133eedd6924b92a38cd47be164019a8253051b98fc8a1e5773e9ab1f07bb7-runc.5tKUiC.mount: Deactivated successfully.
Sep 13 00:07:58.129288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927465119.mount: Deactivated successfully.
Sep 13 00:07:58.967992 containerd[1972]: time="2025-09-13T00:07:58.967904601Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:58.969846 containerd[1972]: time="2025-09-13T00:07:58.969778522Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:07:58.972165 containerd[1972]: time="2025-09-13T00:07:58.972099748Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:58.976293 containerd[1972]: time="2025-09-13T00:07:58.976246088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:07:58.978750 containerd[1972]: time="2025-09-13T00:07:58.978647399Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.076739045s"
Sep 13 00:07:58.978750 containerd[1972]: time="2025-09-13T00:07:58.978696914Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:07:58.985404 containerd[1972]: time="2025-09-13T00:07:58.984830766Z" level=info msg="CreateContainer within sandbox \"d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:07:59.020021 containerd[1972]: time="2025-09-13T00:07:59.019949399Z" level=info msg="CreateContainer within sandbox \"d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009\""
Sep 13 00:07:59.020673 containerd[1972]: time="2025-09-13T00:07:59.020646322Z" level=info msg="StartContainer for \"ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009\""
Sep 13 00:07:59.045715 systemd[1]: run-containerd-runc-k8s.io-ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009-runc.QwyqAR.mount: Deactivated successfully.
Sep 13 00:07:59.054600 systemd[1]: Started cri-containerd-ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009.scope - libcontainer container ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009.
Sep 13 00:07:59.082781 containerd[1972]: time="2025-09-13T00:07:59.082734330Z" level=info msg="StartContainer for \"ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009\" returns successfully"
Sep 13 00:08:00.799567 kubelet[3206]: I0913 00:08:00.799493 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-pc8gv" podStartSLOduration=2.718471664 podStartE2EDuration="4.798432928s" podCreationTimestamp="2025-09-13 00:07:56 +0000 UTC" firstStartedPulling="2025-09-13 00:07:56.9013136 +0000 UTC m=+5.693008580" lastFinishedPulling="2025-09-13 00:07:58.981274864 +0000 UTC m=+7.772969844" observedRunningTime="2025-09-13 00:07:59.443675309 +0000 UTC m=+8.235370297" watchObservedRunningTime="2025-09-13 00:08:00.798432928 +0000 UTC m=+9.590127954"
Sep 13 00:08:06.398004 sudo[2327]: pam_unix(sudo:session): session closed for user root
Sep 13 00:08:06.424747 sshd[2324]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:06.430168 systemd[1]: sshd@8-172.31.16.22:22-139.178.89.65:58104.service: Deactivated successfully.
Sep 13 00:08:06.430325 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:08:06.436304 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:08:06.438336 systemd[1]: session-9.scope: Consumed 5.811s CPU time, 142.3M memory peak, 0B memory swap peak.
Sep 13 00:08:06.447221 systemd-logind[1957]: Removed session 9.
Sep 13 00:08:11.150016 systemd[1]: Created slice kubepods-besteffort-pod670c7250_7a37_499d_b972_fc0a3018dc7e.slice - libcontainer container kubepods-besteffort-pod670c7250_7a37_499d_b972_fc0a3018dc7e.slice.
Sep 13 00:08:11.200073 kubelet[3206]: I0913 00:08:11.199972 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmfn\" (UniqueName: \"kubernetes.io/projected/670c7250-7a37-499d-b972-fc0a3018dc7e-kube-api-access-vvmfn\") pod \"calico-typha-89b6cc9fd-7trjz\" (UID: \"670c7250-7a37-499d-b972-fc0a3018dc7e\") " pod="calico-system/calico-typha-89b6cc9fd-7trjz"
Sep 13 00:08:11.202117 kubelet[3206]: I0913 00:08:11.201899 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/670c7250-7a37-499d-b972-fc0a3018dc7e-tigera-ca-bundle\") pod \"calico-typha-89b6cc9fd-7trjz\" (UID: \"670c7250-7a37-499d-b972-fc0a3018dc7e\") " pod="calico-system/calico-typha-89b6cc9fd-7trjz"
Sep 13 00:08:11.202117 kubelet[3206]: I0913 00:08:11.202032 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/670c7250-7a37-499d-b972-fc0a3018dc7e-typha-certs\") pod \"calico-typha-89b6cc9fd-7trjz\" (UID: \"670c7250-7a37-499d-b972-fc0a3018dc7e\") " pod="calico-system/calico-typha-89b6cc9fd-7trjz"
Sep 13 00:08:11.457171 containerd[1972]: time="2025-09-13T00:08:11.457115243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89b6cc9fd-7trjz,Uid:670c7250-7a37-499d-b972-fc0a3018dc7e,Namespace:calico-system,Attempt:0,}"
Sep 13 00:08:11.534418 containerd[1972]: time="2025-09-13T00:08:11.532219776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:08:11.534418 containerd[1972]: time="2025-09-13T00:08:11.532303656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:08:11.534418 containerd[1972]: time="2025-09-13T00:08:11.532340823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:08:11.534418 containerd[1972]: time="2025-09-13T00:08:11.532625497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:08:11.666665 systemd[1]: Started cri-containerd-b89862a4e29301a2d1702b3c47271ff9225f2590c85b0f884b902cfd4680bbed.scope - libcontainer container b89862a4e29301a2d1702b3c47271ff9225f2590c85b0f884b902cfd4680bbed.
Sep 13 00:08:11.721845 systemd[1]: Created slice kubepods-besteffort-podfda06714_6718_433a_bbb4_051822ac6c81.slice - libcontainer container kubepods-besteffort-podfda06714_6718_433a_bbb4_051822ac6c81.slice.
Sep 13 00:08:11.810968 kubelet[3206]: I0913 00:08:11.809889 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fda06714-6718-433a-bbb4-051822ac6c81-tigera-ca-bundle\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.810968 kubelet[3206]: I0913 00:08:11.809946 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55sp\" (UniqueName: \"kubernetes.io/projected/fda06714-6718-433a-bbb4-051822ac6c81-kube-api-access-m55sp\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.810968 kubelet[3206]: I0913 00:08:11.809979 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-cni-bin-dir\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.810968 kubelet[3206]: I0913 00:08:11.810000 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-lib-modules\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.810968 kubelet[3206]: I0913 00:08:11.810025 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-xtables-lock\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.811891 kubelet[3206]: I0913 00:08:11.810046 
3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-cni-log-dir\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.811891 kubelet[3206]: I0913 00:08:11.810066 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fda06714-6718-433a-bbb4-051822ac6c81-node-certs\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.811891 kubelet[3206]: I0913 00:08:11.810091 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-var-lib-calico\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.811891 kubelet[3206]: I0913 00:08:11.810116 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-var-run-calico\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.811891 kubelet[3206]: I0913 00:08:11.810144 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-cni-net-dir\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.812109 kubelet[3206]: I0913 00:08:11.810170 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-flexvol-driver-host\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.812109 kubelet[3206]: I0913 00:08:11.810194 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fda06714-6718-433a-bbb4-051822ac6c81-policysync\") pod \"calico-node-jb9s9\" (UID: \"fda06714-6718-433a-bbb4-051822ac6c81\") " pod="calico-system/calico-node-jb9s9" Sep 13 00:08:11.838207 containerd[1972]: time="2025-09-13T00:08:11.838149341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-89b6cc9fd-7trjz,Uid:670c7250-7a37-499d-b972-fc0a3018dc7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b89862a4e29301a2d1702b3c47271ff9225f2590c85b0f884b902cfd4680bbed\"" Sep 13 00:08:11.876468 containerd[1972]: time="2025-09-13T00:08:11.876424610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:08:11.901182 kubelet[3206]: E0913 00:08:11.900791 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:11.913378 kubelet[3206]: I0913 00:08:11.910964 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d46336d8-eb49-44a4-a6a0-97396eeb5284-socket-dir\") pod \"csi-node-driver-2hfvw\" (UID: \"d46336d8-eb49-44a4-a6a0-97396eeb5284\") " pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:11.913378 kubelet[3206]: I0913 00:08:11.911018 3206 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d46336d8-eb49-44a4-a6a0-97396eeb5284-registration-dir\") pod \"csi-node-driver-2hfvw\" (UID: \"d46336d8-eb49-44a4-a6a0-97396eeb5284\") " pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:11.913378 kubelet[3206]: I0913 00:08:11.911098 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh85n\" (UniqueName: \"kubernetes.io/projected/d46336d8-eb49-44a4-a6a0-97396eeb5284-kube-api-access-wh85n\") pod \"csi-node-driver-2hfvw\" (UID: \"d46336d8-eb49-44a4-a6a0-97396eeb5284\") " pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:11.913378 kubelet[3206]: I0913 00:08:11.911189 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d46336d8-eb49-44a4-a6a0-97396eeb5284-varrun\") pod \"csi-node-driver-2hfvw\" (UID: \"d46336d8-eb49-44a4-a6a0-97396eeb5284\") " pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:11.913378 kubelet[3206]: I0913 00:08:11.911240 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d46336d8-eb49-44a4-a6a0-97396eeb5284-kubelet-dir\") pod \"csi-node-driver-2hfvw\" (UID: \"d46336d8-eb49-44a4-a6a0-97396eeb5284\") " pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:11.929339 kubelet[3206]: E0913 00:08:11.928973 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:11.929339 kubelet[3206]: W0913 00:08:11.929007 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:11.931111 kubelet[3206]: E0913 00:08:11.930079 3206 
plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:11.959636 kubelet[3206]: E0913 00:08:11.958634 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:11.959636 kubelet[3206]: W0913 00:08:11.958661 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:11.959636 kubelet[3206]: E0913 00:08:11.958688 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.012180 kubelet[3206]: E0913 00:08:12.012040 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.012180 kubelet[3206]: W0913 00:08:12.012072 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.012180 kubelet[3206]: E0913 00:08:12.012118 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.014206 kubelet[3206]: E0913 00:08:12.014168 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.014206 kubelet[3206]: W0913 00:08:12.014197 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.014398 kubelet[3206]: E0913 00:08:12.014236 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.015209 kubelet[3206]: E0913 00:08:12.015184 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.015209 kubelet[3206]: W0913 00:08:12.015209 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.015534 kubelet[3206]: E0913 00:08:12.015246 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.015599 kubelet[3206]: E0913 00:08:12.015548 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.015599 kubelet[3206]: W0913 00:08:12.015559 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.015890 kubelet[3206]: E0913 00:08:12.015796 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.016195 kubelet[3206]: E0913 00:08:12.016172 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.016195 kubelet[3206]: W0913 00:08:12.016194 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.016424 kubelet[3206]: E0913 00:08:12.016213 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.016827 kubelet[3206]: E0913 00:08:12.016803 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.016827 kubelet[3206]: W0913 00:08:12.016825 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.017170 kubelet[3206]: E0913 00:08:12.017144 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.017548 kubelet[3206]: E0913 00:08:12.017522 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.017548 kubelet[3206]: W0913 00:08:12.017547 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.017845 kubelet[3206]: E0913 00:08:12.017567 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.018131 kubelet[3206]: E0913 00:08:12.018112 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.018131 kubelet[3206]: W0913 00:08:12.018130 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.018406 kubelet[3206]: E0913 00:08:12.018150 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.018673 kubelet[3206]: E0913 00:08:12.018646 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.018921 kubelet[3206]: W0913 00:08:12.018813 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.019067 kubelet[3206]: E0913 00:08:12.018968 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.019538 kubelet[3206]: E0913 00:08:12.019465 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.019538 kubelet[3206]: W0913 00:08:12.019479 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.019775 kubelet[3206]: E0913 00:08:12.019689 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.020573 kubelet[3206]: E0913 00:08:12.020420 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.020573 kubelet[3206]: W0913 00:08:12.020435 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.021275 kubelet[3206]: E0913 00:08:12.021245 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.021883 kubelet[3206]: E0913 00:08:12.021823 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.021883 kubelet[3206]: W0913 00:08:12.021842 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.022584 kubelet[3206]: E0913 00:08:12.022558 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.023629 kubelet[3206]: E0913 00:08:12.023608 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.023629 kubelet[3206]: W0913 00:08:12.023627 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.024040 kubelet[3206]: E0913 00:08:12.024016 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.025352 kubelet[3206]: E0913 00:08:12.025329 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.025352 kubelet[3206]: W0913 00:08:12.025347 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.025608 kubelet[3206]: E0913 00:08:12.025486 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.025671 kubelet[3206]: E0913 00:08:12.025665 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.025718 kubelet[3206]: W0913 00:08:12.025675 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.025931 kubelet[3206]: E0913 00:08:12.025830 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.025999 kubelet[3206]: E0913 00:08:12.025960 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.025999 kubelet[3206]: W0913 00:08:12.025971 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.028094 kubelet[3206]: E0913 00:08:12.026479 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.028094 kubelet[3206]: E0913 00:08:12.026841 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.028094 kubelet[3206]: W0913 00:08:12.026852 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.029474 kubelet[3206]: E0913 00:08:12.029441 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.029810 kubelet[3206]: E0913 00:08:12.029789 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.029899 kubelet[3206]: W0913 00:08:12.029815 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.029948 kubelet[3206]: E0913 00:08:12.029906 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.030160 kubelet[3206]: E0913 00:08:12.030142 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.030160 kubelet[3206]: W0913 00:08:12.030160 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.030283 kubelet[3206]: E0913 00:08:12.030248 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.031164 containerd[1972]: time="2025-09-13T00:08:12.031121166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb9s9,Uid:fda06714-6718-433a-bbb4-051822ac6c81,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:12.031595 kubelet[3206]: E0913 00:08:12.031571 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.031595 kubelet[3206]: W0913 00:08:12.031595 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.031769 kubelet[3206]: E0913 00:08:12.031616 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.031914 kubelet[3206]: E0913 00:08:12.031882 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.031914 kubelet[3206]: W0913 00:08:12.031899 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.032043 kubelet[3206]: E0913 00:08:12.032025 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.032547 kubelet[3206]: E0913 00:08:12.032516 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.032547 kubelet[3206]: W0913 00:08:12.032534 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.032708 kubelet[3206]: E0913 00:08:12.032688 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.033087 kubelet[3206]: E0913 00:08:12.032956 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.033087 kubelet[3206]: W0913 00:08:12.032970 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.033087 kubelet[3206]: E0913 00:08:12.033036 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.033587 kubelet[3206]: E0913 00:08:12.033540 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.033587 kubelet[3206]: W0913 00:08:12.033553 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.033718 kubelet[3206]: E0913 00:08:12.033585 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.033976 kubelet[3206]: E0913 00:08:12.033867 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.033976 kubelet[3206]: W0913 00:08:12.033885 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.033976 kubelet[3206]: E0913 00:08:12.033898 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:12.048630 kubelet[3206]: E0913 00:08:12.048596 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:12.048630 kubelet[3206]: W0913 00:08:12.048625 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:12.048843 kubelet[3206]: E0913 00:08:12.048651 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:12.082442 containerd[1972]: time="2025-09-13T00:08:12.082027046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:12.082442 containerd[1972]: time="2025-09-13T00:08:12.082270042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:12.082650 containerd[1972]: time="2025-09-13T00:08:12.082454360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:12.083229 containerd[1972]: time="2025-09-13T00:08:12.082872388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:12.114745 systemd[1]: Started cri-containerd-69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8.scope - libcontainer container 69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8. 
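Editor's note: the repeated `driver-call.go` failures above all share one root cause — kubelet's FlexVolume probe tries to execute `nodeagent~uds/uds` under the plugin directory, the binary is absent, so the call produces empty stdout, and decoding "" as JSON fails with "unexpected end of JSON input". A hedged Python sketch of that failure mode (kubelet's real implementation is Go, in `driver-call.go`; this helper is illustrative only):

```python
import json
import subprocess

def call_flexvolume_driver(driver: str, args: list[str]) -> dict:
    """Roughly mimic kubelet's FlexVolume driver call: run the driver
    binary and decode its JSON stdout."""
    try:
        out = subprocess.run([driver, *args], capture_output=True,
                             text=True, check=False).stdout
    except (FileNotFoundError, NotADirectoryError):
        out = ""  # "executable file not found in $PATH" -> empty output
    try:
        return json.loads(out)
    except json.JSONDecodeError:
        # Empty output is invalid JSON: Go's encoding/json reports
        # "unexpected end of JSON input"; Python reports "Expecting value".
        return {"status": "Failure",
                "message": "failed to unmarshal driver output"}
```

Run against the missing `uds` binary with `["init"]`, this returns a Failure status, which is why kubelet logs the unmarshal error, then the driver-call warning, then skips the plugin directory on every probe cycle.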
Sep 13 00:08:12.292435 containerd[1972]: time="2025-09-13T00:08:12.291563034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb9s9,Uid:fda06714-6718-433a-bbb4-051822ac6c81,Namespace:calico-system,Attempt:0,} returns sandbox id \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\"" Sep 13 00:08:13.367297 kubelet[3206]: E0913 00:08:13.366321 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:13.743991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131711608.mount: Deactivated successfully. Sep 13 00:08:14.919495 containerd[1972]: time="2025-09-13T00:08:14.918436863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.926514 containerd[1972]: time="2025-09-13T00:08:14.921241983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:08:14.929035 containerd[1972]: time="2025-09-13T00:08:14.928920524Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.933505 containerd[1972]: time="2025-09-13T00:08:14.933460870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.934071 containerd[1972]: time="2025-09-13T00:08:14.934033975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.057377284s" Sep 13 00:08:14.934071 containerd[1972]: time="2025-09-13T00:08:14.934071833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:08:14.936005 containerd[1972]: time="2025-09-13T00:08:14.935785908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:08:14.957790 containerd[1972]: time="2025-09-13T00:08:14.957740124Z" level=info msg="CreateContainer within sandbox \"b89862a4e29301a2d1702b3c47271ff9225f2590c85b0f884b902cfd4680bbed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:08:14.997916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357712226.mount: Deactivated successfully. Sep 13 00:08:15.003923 containerd[1972]: time="2025-09-13T00:08:15.003857423Z" level=info msg="CreateContainer within sandbox \"b89862a4e29301a2d1702b3c47271ff9225f2590c85b0f884b902cfd4680bbed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"558d4bcf99ac38cef70d7bc5dc609c1a47477cbef63670c542848ff5a9e0f60c\"" Sep 13 00:08:15.005123 containerd[1972]: time="2025-09-13T00:08:15.005081709Z" level=info msg="StartContainer for \"558d4bcf99ac38cef70d7bc5dc609c1a47477cbef63670c542848ff5a9e0f60c\"" Sep 13 00:08:15.074995 systemd[1]: Started cri-containerd-558d4bcf99ac38cef70d7bc5dc609c1a47477cbef63670c542848ff5a9e0f60c.scope - libcontainer container 558d4bcf99ac38cef70d7bc5dc609c1a47477cbef63670c542848ff5a9e0f60c. 
Sep 13 00:08:15.152240 containerd[1972]: time="2025-09-13T00:08:15.152057821Z" level=info msg="StartContainer for \"558d4bcf99ac38cef70d7bc5dc609c1a47477cbef63670c542848ff5a9e0f60c\" returns successfully" Sep 13 00:08:15.371309 kubelet[3206]: E0913 00:08:15.370835 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:15.525999 kubelet[3206]: E0913 00:08:15.525954 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.525999 kubelet[3206]: W0913 00:08:15.525990 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.526352 kubelet[3206]: E0913 00:08:15.526016 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.526892 kubelet[3206]: E0913 00:08:15.526866 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.526892 kubelet[3206]: W0913 00:08:15.526892 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.527345 kubelet[3206]: E0913 00:08:15.526915 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.527426 kubelet[3206]: E0913 00:08:15.527364 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.527426 kubelet[3206]: W0913 00:08:15.527377 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.527426 kubelet[3206]: E0913 00:08:15.527407 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.527929 kubelet[3206]: E0913 00:08:15.527905 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.527929 kubelet[3206]: W0913 00:08:15.527929 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.528250 kubelet[3206]: E0913 00:08:15.527944 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.529478 kubelet[3206]: E0913 00:08:15.529454 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.529576 kubelet[3206]: W0913 00:08:15.529480 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.529576 kubelet[3206]: E0913 00:08:15.529497 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.529825 kubelet[3206]: E0913 00:08:15.529784 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.529825 kubelet[3206]: W0913 00:08:15.529795 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.529825 kubelet[3206]: E0913 00:08:15.529810 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.530092 kubelet[3206]: E0913 00:08:15.530075 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.530092 kubelet[3206]: W0913 00:08:15.530092 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.530339 kubelet[3206]: E0913 00:08:15.530105 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.530434 kubelet[3206]: E0913 00:08:15.530365 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.530434 kubelet[3206]: W0913 00:08:15.530376 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.530434 kubelet[3206]: E0913 00:08:15.530422 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.532462 kubelet[3206]: E0913 00:08:15.532437 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.532462 kubelet[3206]: W0913 00:08:15.532461 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.532600 kubelet[3206]: E0913 00:08:15.532479 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.532804 kubelet[3206]: E0913 00:08:15.532780 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.532804 kubelet[3206]: W0913 00:08:15.532792 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.532897 kubelet[3206]: E0913 00:08:15.532807 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.533064 kubelet[3206]: E0913 00:08:15.533047 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.533129 kubelet[3206]: W0913 00:08:15.533064 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.533129 kubelet[3206]: E0913 00:08:15.533077 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.533375 kubelet[3206]: E0913 00:08:15.533358 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.533475 kubelet[3206]: W0913 00:08:15.533377 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.533475 kubelet[3206]: E0913 00:08:15.533411 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.533671 kubelet[3206]: E0913 00:08:15.533655 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.533816 kubelet[3206]: W0913 00:08:15.533671 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.533816 kubelet[3206]: E0913 00:08:15.533684 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.533914 kubelet[3206]: E0913 00:08:15.533901 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.533914 kubelet[3206]: W0913 00:08:15.533911 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.534011 kubelet[3206]: E0913 00:08:15.533924 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.534420 kubelet[3206]: E0913 00:08:15.534140 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.534420 kubelet[3206]: W0913 00:08:15.534162 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.534420 kubelet[3206]: E0913 00:08:15.534174 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.552943 kubelet[3206]: E0913 00:08:15.552817 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.552943 kubelet[3206]: W0913 00:08:15.552843 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.552943 kubelet[3206]: E0913 00:08:15.552875 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.553423 kubelet[3206]: E0913 00:08:15.553258 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.553423 kubelet[3206]: W0913 00:08:15.553273 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.554515 kubelet[3206]: E0913 00:08:15.554487 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.555409 kubelet[3206]: E0913 00:08:15.554841 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.555409 kubelet[3206]: W0913 00:08:15.554857 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.555409 kubelet[3206]: E0913 00:08:15.554875 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.555409 kubelet[3206]: E0913 00:08:15.555145 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.555409 kubelet[3206]: W0913 00:08:15.555156 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.555409 kubelet[3206]: E0913 00:08:15.555184 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.556211 kubelet[3206]: E0913 00:08:15.555465 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.556211 kubelet[3206]: W0913 00:08:15.555476 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.556211 kubelet[3206]: E0913 00:08:15.555525 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.556515 kubelet[3206]: E0913 00:08:15.556494 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.556601 kubelet[3206]: W0913 00:08:15.556514 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.556656 kubelet[3206]: E0913 00:08:15.556602 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.557007 kubelet[3206]: E0913 00:08:15.556981 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.557007 kubelet[3206]: W0913 00:08:15.557004 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.559572 kubelet[3206]: E0913 00:08:15.559544 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.559762 kubelet[3206]: E0913 00:08:15.559743 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.559851 kubelet[3206]: W0913 00:08:15.559762 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.559912 kubelet[3206]: E0913 00:08:15.559853 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.560206 kubelet[3206]: E0913 00:08:15.560067 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.560206 kubelet[3206]: W0913 00:08:15.560080 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.560317 kubelet[3206]: E0913 00:08:15.560207 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.561075 kubelet[3206]: E0913 00:08:15.560572 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561075 kubelet[3206]: W0913 00:08:15.560585 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.561075 kubelet[3206]: E0913 00:08:15.560603 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.561075 kubelet[3206]: E0913 00:08:15.560862 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561075 kubelet[3206]: W0913 00:08:15.560872 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.561075 kubelet[3206]: E0913 00:08:15.560897 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.561420 kubelet[3206]: E0913 00:08:15.561120 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561420 kubelet[3206]: W0913 00:08:15.561130 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.561420 kubelet[3206]: E0913 00:08:15.561212 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.561420 kubelet[3206]: E0913 00:08:15.561414 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561631 kubelet[3206]: W0913 00:08:15.561424 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.561631 kubelet[3206]: E0913 00:08:15.561440 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.561731 kubelet[3206]: E0913 00:08:15.561670 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561731 kubelet[3206]: W0913 00:08:15.561679 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.561731 kubelet[3206]: E0913 00:08:15.561705 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.561980 kubelet[3206]: E0913 00:08:15.561960 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.561980 kubelet[3206]: W0913 00:08:15.561980 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.562094 kubelet[3206]: E0913 00:08:15.562011 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.564692 kubelet[3206]: E0913 00:08:15.564666 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.564692 kubelet[3206]: W0913 00:08:15.564689 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.564839 kubelet[3206]: E0913 00:08:15.564710 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:15.565598 kubelet[3206]: E0913 00:08:15.565543 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.565598 kubelet[3206]: W0913 00:08:15.565557 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.565726 kubelet[3206]: E0913 00:08:15.565651 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:08:15.565929 kubelet[3206]: E0913 00:08:15.565886 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:08:15.565999 kubelet[3206]: W0913 00:08:15.565931 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:08:15.565999 kubelet[3206]: E0913 00:08:15.565947 3206 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:08:16.270133 containerd[1972]: time="2025-09-13T00:08:16.270068654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.272061 containerd[1972]: time="2025-09-13T00:08:16.271998581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:08:16.274426 containerd[1972]: time="2025-09-13T00:08:16.274181050Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.277796 containerd[1972]: time="2025-09-13T00:08:16.277706882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:16.278225 containerd[1972]: time="2025-09-13T00:08:16.278193415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.342369756s" Sep 13 00:08:16.278446 containerd[1972]: time="2025-09-13T00:08:16.278232006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:08:16.281586 containerd[1972]: time="2025-09-13T00:08:16.281544082Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:08:16.323829 containerd[1972]: time="2025-09-13T00:08:16.323762975Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0\"" Sep 13 00:08:16.324751 containerd[1972]: time="2025-09-13T00:08:16.324716056Z" level=info msg="StartContainer for \"835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0\"" Sep 13 00:08:16.369174 systemd[1]: run-containerd-runc-k8s.io-835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0-runc.3eiQ8k.mount: Deactivated successfully. Sep 13 00:08:16.378646 systemd[1]: Started cri-containerd-835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0.scope - libcontainer container 835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0. 
Sep 13 00:08:16.425005 containerd[1972]: time="2025-09-13T00:08:16.424695430Z" level=info msg="StartContainer for \"835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0\" returns successfully" Sep 13 00:08:16.454228 systemd[1]: cri-containerd-835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0.scope: Deactivated successfully. Sep 13 00:08:16.509873 kubelet[3206]: I0913 00:08:16.509095 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:16.527182 kubelet[3206]: I0913 00:08:16.526329 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-89b6cc9fd-7trjz" podStartSLOduration=2.431396056 podStartE2EDuration="5.526307533s" podCreationTimestamp="2025-09-13 00:08:11 +0000 UTC" firstStartedPulling="2025-09-13 00:08:11.840373587 +0000 UTC m=+20.632068555" lastFinishedPulling="2025-09-13 00:08:14.935285064 +0000 UTC m=+23.726980032" observedRunningTime="2025-09-13 00:08:15.568679049 +0000 UTC m=+24.360374039" watchObservedRunningTime="2025-09-13 00:08:16.526307533 +0000 UTC m=+25.318002534" Sep 13 00:08:16.724196 containerd[1972]: time="2025-09-13T00:08:16.688965089Z" level=info msg="shim disconnected" id=835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0 namespace=k8s.io Sep 13 00:08:16.724196 containerd[1972]: time="2025-09-13T00:08:16.724190105Z" level=warning msg="cleaning up after shim disconnected" id=835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0 namespace=k8s.io Sep 13 00:08:16.724529 containerd[1972]: time="2025-09-13T00:08:16.724216865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:16.945782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-835f450c9fc097aa608bf6e34787ed5b3a998245750f83e4d2a4b630ec8767e0-rootfs.mount: Deactivated successfully. 
Sep 13 00:08:17.366536 kubelet[3206]: E0913 00:08:17.366401 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:17.514590 containerd[1972]: time="2025-09-13T00:08:17.514531446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:08:19.367144 kubelet[3206]: E0913 00:08:19.367057 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:20.631412 containerd[1972]: time="2025-09-13T00:08:20.629800893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.632715 containerd[1972]: time="2025-09-13T00:08:20.632526248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:08:20.635203 containerd[1972]: time="2025-09-13T00:08:20.634890911Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.639892 containerd[1972]: time="2025-09-13T00:08:20.638535125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:20.639892 containerd[1972]: time="2025-09-13T00:08:20.639411674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.124823008s" Sep 13 00:08:20.639892 containerd[1972]: time="2025-09-13T00:08:20.639443189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:08:20.644627 containerd[1972]: time="2025-09-13T00:08:20.644589206Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:08:20.674411 containerd[1972]: time="2025-09-13T00:08:20.674339122Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b\"" Sep 13 00:08:20.676238 containerd[1972]: time="2025-09-13T00:08:20.674888713Z" level=info msg="StartContainer for \"e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b\"" Sep 13 00:08:20.721636 systemd[1]: Started cri-containerd-e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b.scope - libcontainer container e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b. 
Sep 13 00:08:20.765305 containerd[1972]: time="2025-09-13T00:08:20.765158328Z" level=info msg="StartContainer for \"e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b\" returns successfully" Sep 13 00:08:21.367244 kubelet[3206]: E0913 00:08:21.366100 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:22.079466 systemd[1]: cri-containerd-e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b.scope: Deactivated successfully. Sep 13 00:08:22.124413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b-rootfs.mount: Deactivated successfully. Sep 13 00:08:22.133928 containerd[1972]: time="2025-09-13T00:08:22.133842463Z" level=info msg="shim disconnected" id=e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b namespace=k8s.io Sep 13 00:08:22.133928 containerd[1972]: time="2025-09-13T00:08:22.133917424Z" level=warning msg="cleaning up after shim disconnected" id=e8b782516ca8fa4ef82cc4e9dcd9bc7168ad1378a8343b290dfd545bd0c0dd1b namespace=k8s.io Sep 13 00:08:22.133928 containerd[1972]: time="2025-09-13T00:08:22.133929444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:08:22.165902 kubelet[3206]: I0913 00:08:22.165875 3206 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:08:22.212095 kubelet[3206]: I0913 00:08:22.212064 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ce4e15f-f6d2-489b-b74d-a800f2ee80ad-config-volume\") pod \"coredns-668d6bf9bc-qmtk8\" (UID: \"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad\") " 
pod="kube-system/coredns-668d6bf9bc-qmtk8" Sep 13 00:08:22.212095 kubelet[3206]: I0913 00:08:22.212104 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg645\" (UniqueName: \"kubernetes.io/projected/2ce4e15f-f6d2-489b-b74d-a800f2ee80ad-kube-api-access-qg645\") pod \"coredns-668d6bf9bc-qmtk8\" (UID: \"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad\") " pod="kube-system/coredns-668d6bf9bc-qmtk8" Sep 13 00:08:22.213540 kubelet[3206]: I0913 00:08:22.212946 3206 status_manager.go:890] "Failed to get status for pod" podUID="2ce4e15f-f6d2-489b-b74d-a800f2ee80ad" pod="kube-system/coredns-668d6bf9bc-qmtk8" err="pods \"coredns-668d6bf9bc-qmtk8\" is forbidden: User \"system:node:ip-172-31-16-22\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-22' and this object" Sep 13 00:08:22.224586 systemd[1]: Created slice kubepods-burstable-pod2ce4e15f_f6d2_489b_b74d_a800f2ee80ad.slice - libcontainer container kubepods-burstable-pod2ce4e15f_f6d2_489b_b74d_a800f2ee80ad.slice. Sep 13 00:08:22.260532 systemd[1]: Created slice kubepods-burstable-pod790eb649_ffde_4aab_abe6_6ba328fbc032.slice - libcontainer container kubepods-burstable-pod790eb649_ffde_4aab_abe6_6ba328fbc032.slice. Sep 13 00:08:22.273191 systemd[1]: Created slice kubepods-besteffort-pod48dee0db_351b_4286_ac8b_c0ef5144392c.slice - libcontainer container kubepods-besteffort-pod48dee0db_351b_4286_ac8b_c0ef5144392c.slice. Sep 13 00:08:22.280903 systemd[1]: Created slice kubepods-besteffort-pod0c763216_54a0_463c_943a_eb519ffb6816.slice - libcontainer container kubepods-besteffort-pod0c763216_54a0_463c_943a_eb519ffb6816.slice. Sep 13 00:08:22.288703 systemd[1]: Created slice kubepods-besteffort-podc150b87b_3909_4283_ab15_dcc8b6a0c68d.slice - libcontainer container kubepods-besteffort-podc150b87b_3909_4283_ab15_dcc8b6a0c68d.slice. 
Sep 13 00:08:22.296771 systemd[1]: Created slice kubepods-besteffort-pod82bd0e8e_02c0_47c6_a6c0_c61583a0d7d0.slice - libcontainer container kubepods-besteffort-pod82bd0e8e_02c0_47c6_a6c0_c61583a0d7d0.slice.
Sep 13 00:08:22.307903 systemd[1]: Created slice kubepods-besteffort-pod1c5637eb_c80d_4a9c_92f1_4b8bb3195348.slice - libcontainer container kubepods-besteffort-pod1c5637eb_c80d_4a9c_92f1_4b8bb3195348.slice.
Sep 13 00:08:22.314245 kubelet[3206]: I0913 00:08:22.313539 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/790eb649-ffde-4aab-abe6-6ba328fbc032-config-volume\") pod \"coredns-668d6bf9bc-dhbhm\" (UID: \"790eb649-ffde-4aab-abe6-6ba328fbc032\") " pod="kube-system/coredns-668d6bf9bc-dhbhm"
Sep 13 00:08:22.415088 kubelet[3206]: I0913 00:08:22.414365 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cl8\" (UniqueName: \"kubernetes.io/projected/790eb649-ffde-4aab-abe6-6ba328fbc032-kube-api-access-q2cl8\") pod \"coredns-668d6bf9bc-dhbhm\" (UID: \"790eb649-ffde-4aab-abe6-6ba328fbc032\") " pod="kube-system/coredns-668d6bf9bc-dhbhm"
Sep 13 00:08:22.415088 kubelet[3206]: I0913 00:08:22.414475 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-ca-bundle\") pod \"whisker-8c785d7cc-7x628\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " pod="calico-system/whisker-8c785d7cc-7x628"
Sep 13 00:08:22.415088 kubelet[3206]: I0913 00:08:22.414511 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4gp\" (UniqueName: \"kubernetes.io/projected/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-kube-api-access-zs4gp\") pod \"whisker-8c785d7cc-7x628\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " pod="calico-system/whisker-8c785d7cc-7x628"
Sep 13 00:08:22.415088 kubelet[3206]: I0913 00:08:22.414529 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c763216-54a0-463c-943a-eb519ffb6816-calico-apiserver-certs\") pod \"calico-apiserver-6c7768f9b8-rx6mc\" (UID: \"0c763216-54a0-463c-943a-eb519ffb6816\") " pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc"
Sep 13 00:08:22.415088 kubelet[3206]: I0913 00:08:22.414544 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48dee0db-351b-4286-ac8b-c0ef5144392c-tigera-ca-bundle\") pod \"calico-kube-controllers-86d996465f-bdr2b\" (UID: \"48dee0db-351b-4286-ac8b-c0ef5144392c\") " pod="calico-system/calico-kube-controllers-86d996465f-bdr2b"
Sep 13 00:08:22.415672 kubelet[3206]: I0913 00:08:22.414560 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wmfn\" (UniqueName: \"kubernetes.io/projected/48dee0db-351b-4286-ac8b-c0ef5144392c-kube-api-access-5wmfn\") pod \"calico-kube-controllers-86d996465f-bdr2b\" (UID: \"48dee0db-351b-4286-ac8b-c0ef5144392c\") " pod="calico-system/calico-kube-controllers-86d996465f-bdr2b"
Sep 13 00:08:22.415672 kubelet[3206]: I0913 00:08:22.414594 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-backend-key-pair\") pod \"whisker-8c785d7cc-7x628\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " pod="calico-system/whisker-8c785d7cc-7x628"
Sep 13 00:08:22.415672 kubelet[3206]: I0913 00:08:22.414620 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c150b87b-3909-4283-ab15-dcc8b6a0c68d-config\") pod \"goldmane-54d579b49d-m25gq\" (UID: \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\") " pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:22.415672 kubelet[3206]: I0913 00:08:22.414637 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4klq\" (UniqueName: \"kubernetes.io/projected/0c763216-54a0-463c-943a-eb519ffb6816-kube-api-access-w4klq\") pod \"calico-apiserver-6c7768f9b8-rx6mc\" (UID: \"0c763216-54a0-463c-943a-eb519ffb6816\") " pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc"
Sep 13 00:08:22.415672 kubelet[3206]: I0913 00:08:22.414658 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzt8p\" (UniqueName: \"kubernetes.io/projected/1c5637eb-c80d-4a9c-92f1-4b8bb3195348-kube-api-access-wzt8p\") pod \"calico-apiserver-6c7768f9b8-wmtxh\" (UID: \"1c5637eb-c80d-4a9c-92f1-4b8bb3195348\") " pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh"
Sep 13 00:08:22.415821 kubelet[3206]: I0913 00:08:22.414685 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwfb\" (UniqueName: \"kubernetes.io/projected/c150b87b-3909-4283-ab15-dcc8b6a0c68d-kube-api-access-sfwfb\") pod \"goldmane-54d579b49d-m25gq\" (UID: \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\") " pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:22.415821 kubelet[3206]: I0913 00:08:22.414720 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c5637eb-c80d-4a9c-92f1-4b8bb3195348-calico-apiserver-certs\") pod \"calico-apiserver-6c7768f9b8-wmtxh\" (UID: \"1c5637eb-c80d-4a9c-92f1-4b8bb3195348\") " pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh"
Sep 13 00:08:22.415821 kubelet[3206]: I0913 00:08:22.414736 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c150b87b-3909-4283-ab15-dcc8b6a0c68d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-m25gq\" (UID: \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\") " pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:22.415821 kubelet[3206]: I0913 00:08:22.414754 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c150b87b-3909-4283-ab15-dcc8b6a0c68d-goldmane-key-pair\") pod \"goldmane-54d579b49d-m25gq\" (UID: \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\") " pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:22.542423 containerd[1972]: time="2025-09-13T00:08:22.540738908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 13 00:08:22.557095 containerd[1972]: time="2025-09-13T00:08:22.557012616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmtk8,Uid:2ce4e15f-f6d2-489b-b74d-a800f2ee80ad,Namespace:kube-system,Attempt:0,}"
Sep 13 00:08:22.598614 containerd[1972]: time="2025-09-13T00:08:22.598577670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-m25gq,Uid:c150b87b-3909-4283-ab15-dcc8b6a0c68d,Namespace:calico-system,Attempt:0,}"
Sep 13 00:08:22.600628 containerd[1972]: time="2025-09-13T00:08:22.600004951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d996465f-bdr2b,Uid:48dee0db-351b-4286-ac8b-c0ef5144392c,Namespace:calico-system,Attempt:0,}"
Sep 13 00:08:22.600628 containerd[1972]: time="2025-09-13T00:08:22.600421788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-rx6mc,Uid:0c763216-54a0-463c-943a-eb519ffb6816,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:08:22.606719 containerd[1972]: time="2025-09-13T00:08:22.606657650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c785d7cc-7x628,Uid:82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0,Namespace:calico-system,Attempt:0,}"
Sep 13 00:08:22.613523 containerd[1972]: time="2025-09-13T00:08:22.612931916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-wmtxh,Uid:1c5637eb-c80d-4a9c-92f1-4b8bb3195348,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:08:22.870416 containerd[1972]: time="2025-09-13T00:08:22.869451606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhbhm,Uid:790eb649-ffde-4aab-abe6-6ba328fbc032,Namespace:kube-system,Attempt:0,}"
Sep 13 00:08:23.146663 containerd[1972]: time="2025-09-13T00:08:23.144109834Z" level=error msg="Failed to destroy network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.155465 containerd[1972]: time="2025-09-13T00:08:23.151942361Z" level=error msg="encountered an error cleaning up failed sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.155736 containerd[1972]: time="2025-09-13T00:08:23.155694970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-wmtxh,Uid:1c5637eb-c80d-4a9c-92f1-4b8bb3195348,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.177537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec-shm.mount: Deactivated successfully.
Sep 13 00:08:23.178200 containerd[1972]: time="2025-09-13T00:08:23.178124098Z" level=error msg="Failed to destroy network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.185495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e-shm.mount: Deactivated successfully.
Sep 13 00:08:23.190371 containerd[1972]: time="2025-09-13T00:08:23.186278076Z" level=error msg="encountered an error cleaning up failed sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.190371 containerd[1972]: time="2025-09-13T00:08:23.188824943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-m25gq,Uid:c150b87b-3909-4283-ab15-dcc8b6a0c68d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.200520 containerd[1972]: time="2025-09-13T00:08:23.200452837Z" level=error msg="Failed to destroy network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.201532 containerd[1972]: time="2025-09-13T00:08:23.200928329Z" level=error msg="encountered an error cleaning up failed sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.201532 containerd[1972]: time="2025-09-13T00:08:23.200996465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-rx6mc,Uid:0c763216-54a0-463c-943a-eb519ffb6816,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.201532 containerd[1972]: time="2025-09-13T00:08:23.201179182Z" level=error msg="Failed to destroy network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.207151 kubelet[3206]: E0913 00:08:23.204534 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.207151 kubelet[3206]: E0913 00:08:23.204633 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh"
Sep 13 00:08:23.207151 kubelet[3206]: E0913 00:08:23.204665 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh"
Sep 13 00:08:23.206104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a-shm.mount: Deactivated successfully.
Sep 13 00:08:23.208703 kubelet[3206]: E0913 00:08:23.204719 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7768f9b8-wmtxh_calico-apiserver(1c5637eb-c80d-4a9c-92f1-4b8bb3195348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7768f9b8-wmtxh_calico-apiserver(1c5637eb-c80d-4a9c-92f1-4b8bb3195348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh" podUID="1c5637eb-c80d-4a9c-92f1-4b8bb3195348"
Sep 13 00:08:23.208703 kubelet[3206]: E0913 00:08:23.205576 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.208703 kubelet[3206]: E0913 00:08:23.205737 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc"
Sep 13 00:08:23.206632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716-shm.mount: Deactivated successfully.
Sep 13 00:08:23.209841 containerd[1972]: time="2025-09-13T00:08:23.209168064Z" level=error msg="encountered an error cleaning up failed sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.209916 kubelet[3206]: E0913 00:08:23.205781 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc"
Sep 13 00:08:23.209916 kubelet[3206]: E0913 00:08:23.206147 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.209916 kubelet[3206]: E0913 00:08:23.206311 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7768f9b8-rx6mc_calico-apiserver(0c763216-54a0-463c-943a-eb519ffb6816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7768f9b8-rx6mc_calico-apiserver(0c763216-54a0-463c-943a-eb519ffb6816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc" podUID="0c763216-54a0-463c-943a-eb519ffb6816"
Sep 13 00:08:23.210157 kubelet[3206]: E0913 00:08:23.206583 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:23.210157 kubelet[3206]: E0913 00:08:23.206734 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-m25gq"
Sep 13 00:08:23.210157 kubelet[3206]: E0913 00:08:23.206916 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-m25gq_calico-system(c150b87b-3909-4283-ab15-dcc8b6a0c68d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-m25gq_calico-system(c150b87b-3909-4283-ab15-dcc8b6a0c68d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-m25gq" podUID="c150b87b-3909-4283-ab15-dcc8b6a0c68d"
Sep 13 00:08:23.210345 containerd[1972]: time="2025-09-13T00:08:23.210253461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d996465f-bdr2b,Uid:48dee0db-351b-4286-ac8b-c0ef5144392c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.216129 kubelet[3206]: E0913 00:08:23.211846 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.216129 kubelet[3206]: E0913 00:08:23.212233 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d996465f-bdr2b"
Sep 13 00:08:23.216129 kubelet[3206]: E0913 00:08:23.212268 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d996465f-bdr2b"
Sep 13 00:08:23.216864 kubelet[3206]: E0913 00:08:23.214877 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d996465f-bdr2b_calico-system(48dee0db-351b-4286-ac8b-c0ef5144392c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d996465f-bdr2b_calico-system(48dee0db-351b-4286-ac8b-c0ef5144392c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d996465f-bdr2b" podUID="48dee0db-351b-4286-ac8b-c0ef5144392c"
Sep 13 00:08:23.222806 containerd[1972]: time="2025-09-13T00:08:23.222728752Z" level=error msg="Failed to destroy network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.223645 containerd[1972]: time="2025-09-13T00:08:23.223592677Z" level=error msg="encountered an error cleaning up failed sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.223856 containerd[1972]: time="2025-09-13T00:08:23.223804513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c785d7cc-7x628,Uid:82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.225153 kubelet[3206]: E0913 00:08:23.225095 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.225484 kubelet[3206]: E0913 00:08:23.225340 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c785d7cc-7x628"
Sep 13 00:08:23.225661 kubelet[3206]: E0913 00:08:23.225568 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8c785d7cc-7x628"
Sep 13 00:08:23.225976 kubelet[3206]: E0913 00:08:23.225913 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8c785d7cc-7x628_calico-system(82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8c785d7cc-7x628_calico-system(82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c785d7cc-7x628" podUID="82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0"
Sep 13 00:08:23.245919 containerd[1972]: time="2025-09-13T00:08:23.245846535Z" level=error msg="Failed to destroy network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.246839 containerd[1972]: time="2025-09-13T00:08:23.246762526Z" level=error msg="encountered an error cleaning up failed sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.247093 containerd[1972]: time="2025-09-13T00:08:23.246980049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmtk8,Uid:2ce4e15f-f6d2-489b-b74d-a800f2ee80ad,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.247633 kubelet[3206]: E0913 00:08:23.247564 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.247827 kubelet[3206]: E0913 00:08:23.247719 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qmtk8"
Sep 13 00:08:23.248239 kubelet[3206]: E0913 00:08:23.247917 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qmtk8"
Sep 13 00:08:23.248475 kubelet[3206]: E0913 00:08:23.248441 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qmtk8_kube-system(2ce4e15f-f6d2-489b-b74d-a800f2ee80ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qmtk8_kube-system(2ce4e15f-f6d2-489b-b74d-a800f2ee80ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qmtk8" podUID="2ce4e15f-f6d2-489b-b74d-a800f2ee80ad"
Sep 13 00:08:23.259076 containerd[1972]: time="2025-09-13T00:08:23.259002795Z" level=error msg="Failed to destroy network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.259506 containerd[1972]: time="2025-09-13T00:08:23.259458555Z" level=error msg="encountered an error cleaning up failed sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.259664 containerd[1972]: time="2025-09-13T00:08:23.259530342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhbhm,Uid:790eb649-ffde-4aab-abe6-6ba328fbc032,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.259986 kubelet[3206]: E0913 00:08:23.259943 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:08:23.260072 kubelet[3206]: E0913 00:08:23.260012 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dhbhm"
Sep 13 00:08:23.260072 kubelet[3206]: E0913 00:08:23.260043 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dhbhm"
Sep 13 00:08:23.260165 kubelet[3206]: E0913 00:08:23.260099 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dhbhm_kube-system(790eb649-ffde-4aab-abe6-6ba328fbc032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dhbhm_kube-system(790eb649-ffde-4aab-abe6-6ba328fbc032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dhbhm" podUID="790eb649-ffde-4aab-abe6-6ba328fbc032"
Sep 13 00:08:23.376693 systemd[1]: Created slice
kubepods-besteffort-podd46336d8_eb49_44a4_a6a0_97396eeb5284.slice - libcontainer container kubepods-besteffort-podd46336d8_eb49_44a4_a6a0_97396eeb5284.slice. Sep 13 00:08:23.380556 containerd[1972]: time="2025-09-13T00:08:23.380273982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2hfvw,Uid:d46336d8-eb49-44a4-a6a0-97396eeb5284,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:23.449060 containerd[1972]: time="2025-09-13T00:08:23.448984794Z" level=error msg="Failed to destroy network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.449324 containerd[1972]: time="2025-09-13T00:08:23.449287833Z" level=error msg="encountered an error cleaning up failed sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.449435 containerd[1972]: time="2025-09-13T00:08:23.449346658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2hfvw,Uid:d46336d8-eb49-44a4-a6a0-97396eeb5284,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.449612 kubelet[3206]: E0913 00:08:23.449574 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.449917 kubelet[3206]: E0913 00:08:23.449630 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:23.449917 kubelet[3206]: E0913 00:08:23.449650 3206 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2hfvw" Sep 13 00:08:23.449917 kubelet[3206]: E0913 00:08:23.449690 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2hfvw_calico-system(d46336d8-eb49-44a4-a6a0-97396eeb5284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2hfvw_calico-system(d46336d8-eb49-44a4-a6a0-97396eeb5284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2hfvw" 
podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:23.540747 kubelet[3206]: I0913 00:08:23.540683 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:23.544908 kubelet[3206]: I0913 00:08:23.544371 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:23.577483 kubelet[3206]: I0913 00:08:23.577013 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:23.578933 kubelet[3206]: I0913 00:08:23.578906 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:23.582429 kubelet[3206]: I0913 00:08:23.582083 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:23.595672 kubelet[3206]: I0913 00:08:23.595472 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:23.603695 kubelet[3206]: I0913 00:08:23.603247 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:23.608827 kubelet[3206]: I0913 00:08:23.608799 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:23.629881 containerd[1972]: time="2025-09-13T00:08:23.629546393Z" level=info msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" Sep 13 00:08:23.633768 containerd[1972]: 
time="2025-09-13T00:08:23.632652660Z" level=info msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" Sep 13 00:08:23.635955 containerd[1972]: time="2025-09-13T00:08:23.635897392Z" level=info msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" Sep 13 00:08:23.636161 containerd[1972]: time="2025-09-13T00:08:23.636139430Z" level=info msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" Sep 13 00:08:23.636255 containerd[1972]: time="2025-09-13T00:08:23.636227300Z" level=info msg="Ensure that sandbox 0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92 in task-service has been cleanup successfully" Sep 13 00:08:23.636495 containerd[1972]: time="2025-09-13T00:08:23.636470854Z" level=info msg="Ensure that sandbox 29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec in task-service has been cleanup successfully" Sep 13 00:08:23.641097 containerd[1972]: time="2025-09-13T00:08:23.635928932Z" level=info msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" Sep 13 00:08:23.641327 containerd[1972]: time="2025-09-13T00:08:23.641288484Z" level=info msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" Sep 13 00:08:23.642508 containerd[1972]: time="2025-09-13T00:08:23.642468694Z" level=info msg="Ensure that sandbox 3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159 in task-service has been cleanup successfully" Sep 13 00:08:23.642972 containerd[1972]: time="2025-09-13T00:08:23.636154658Z" level=info msg="Ensure that sandbox 3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e in task-service has been cleanup successfully" Sep 13 00:08:23.645256 containerd[1972]: time="2025-09-13T00:08:23.642296901Z" level=info msg="Ensure that sandbox e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069 in task-service has been cleanup 
successfully" Sep 13 00:08:23.646046 containerd[1972]: time="2025-09-13T00:08:23.636197870Z" level=info msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" Sep 13 00:08:23.646266 containerd[1972]: time="2025-09-13T00:08:23.646238751Z" level=info msg="Ensure that sandbox 2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59 in task-service has been cleanup successfully" Sep 13 00:08:23.648252 containerd[1972]: time="2025-09-13T00:08:23.635897395Z" level=info msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" Sep 13 00:08:23.648578 containerd[1972]: time="2025-09-13T00:08:23.648546469Z" level=info msg="Ensure that sandbox aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a in task-service has been cleanup successfully" Sep 13 00:08:23.648746 containerd[1972]: time="2025-09-13T00:08:23.636223913Z" level=info msg="Ensure that sandbox 3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716 in task-service has been cleanup successfully" Sep 13 00:08:23.879500 containerd[1972]: time="2025-09-13T00:08:23.878963809Z" level=error msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" failed" error="failed to destroy network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.879500 containerd[1972]: time="2025-09-13T00:08:23.879150071Z" level=error msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" failed" error="failed to destroy network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 13 00:08:23.885106 containerd[1972]: time="2025-09-13T00:08:23.885044457Z" level=error msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" failed" error="failed to destroy network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.892667 kubelet[3206]: E0913 00:08:23.892450 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:23.892667 kubelet[3206]: E0913 00:08:23.891788 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:23.893019 containerd[1972]: time="2025-09-13T00:08:23.892859132Z" level=error msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" failed" error="failed to destroy network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.893019 containerd[1972]: time="2025-09-13T00:08:23.892971725Z" level=error msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" failed" error="failed to destroy network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.895903 containerd[1972]: time="2025-09-13T00:08:23.893142247Z" level=error msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" failed" error="failed to destroy network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.895903 containerd[1972]: time="2025-09-13T00:08:23.893231745Z" level=error msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" failed" error="failed to destroy network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:23.900225 containerd[1972]: time="2025-09-13T00:08:23.900176419Z" level=error msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" failed" error="failed to destroy network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 13 00:08:23.905806 kubelet[3206]: E0913 00:08:23.892524 3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59"} Sep 13 00:08:23.905806 kubelet[3206]: E0913 00:08:23.904330 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.905806 kubelet[3206]: E0913 00:08:23.904427 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8c785d7cc-7x628" podUID="82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0" Sep 13 00:08:23.905806 kubelet[3206]: E0913 00:08:23.904599 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 
00:08:23.905806 kubelet[3206]: E0913 00:08:23.904678 3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec"} Sep 13 00:08:23.906230 kubelet[3206]: E0913 00:08:23.904730 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c5637eb-c80d-4a9c-92f1-4b8bb3195348\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.906230 kubelet[3206]: E0913 00:08:23.904757 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c5637eb-c80d-4a9c-92f1-4b8bb3195348\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh" podUID="1c5637eb-c80d-4a9c-92f1-4b8bb3195348" Sep 13 00:08:23.906230 kubelet[3206]: E0913 00:08:23.904788 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:23.906230 
kubelet[3206]: E0913 00:08:23.904810 3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159"} Sep 13 00:08:23.906506 kubelet[3206]: E0913 00:08:23.904837 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"790eb649-ffde-4aab-abe6-6ba328fbc032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.906506 kubelet[3206]: E0913 00:08:23.904862 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"790eb649-ffde-4aab-abe6-6ba328fbc032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dhbhm" podUID="790eb649-ffde-4aab-abe6-6ba328fbc032" Sep 13 00:08:23.906506 kubelet[3206]: E0913 00:08:23.904914 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:23.906506 kubelet[3206]: E0913 00:08:23.904937 
3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e"} Sep 13 00:08:23.906800 kubelet[3206]: E0913 00:08:23.904968 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.906800 kubelet[3206]: E0913 00:08:23.904995 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c150b87b-3909-4283-ab15-dcc8b6a0c68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-m25gq" podUID="c150b87b-3909-4283-ab15-dcc8b6a0c68d" Sep 13 00:08:23.906800 kubelet[3206]: E0913 00:08:23.905027 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:23.906800 kubelet[3206]: E0913 00:08:23.905051 3206 kuberuntime_manager.go:1546] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716"} Sep 13 00:08:23.907520 kubelet[3206]: E0913 00:08:23.905077 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48dee0db-351b-4286-ac8b-c0ef5144392c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.907520 kubelet[3206]: E0913 00:08:23.905107 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48dee0db-351b-4286-ac8b-c0ef5144392c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d996465f-bdr2b" podUID="48dee0db-351b-4286-ac8b-c0ef5144392c" Sep 13 00:08:23.907520 kubelet[3206]: E0913 00:08:23.905133 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:23.907520 kubelet[3206]: E0913 00:08:23.905154 3206 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92"} Sep 13 00:08:23.907767 kubelet[3206]: E0913 00:08:23.905181 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d46336d8-eb49-44a4-a6a0-97396eeb5284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.907767 kubelet[3206]: E0913 00:08:23.905207 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d46336d8-eb49-44a4-a6a0-97396eeb5284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2hfvw" podUID="d46336d8-eb49-44a4-a6a0-97396eeb5284" Sep 13 00:08:23.907767 kubelet[3206]: E0913 00:08:23.905238 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:23.907767 kubelet[3206]: E0913 00:08:23.905257 3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a"} Sep 13 00:08:23.908005 kubelet[3206]: E0913 00:08:23.905284 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c763216-54a0-463c-943a-eb519ffb6816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.908005 kubelet[3206]: E0913 00:08:23.905309 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c763216-54a0-463c-943a-eb519ffb6816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc" podUID="0c763216-54a0-463c-943a-eb519ffb6816" Sep 13 00:08:23.908005 kubelet[3206]: E0913 00:08:23.892648 3206 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069"} Sep 13 00:08:23.908005 kubelet[3206]: E0913 00:08:23.907492 3206 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:23.908255 kubelet[3206]: E0913 00:08:23.907527 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qmtk8" podUID="2ce4e15f-f6d2-489b-b74d-a800f2ee80ad" Sep 13 00:08:24.129242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159-shm.mount: Deactivated successfully. Sep 13 00:08:24.129366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59-shm.mount: Deactivated successfully. Sep 13 00:08:24.129476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069-shm.mount: Deactivated successfully. Sep 13 00:08:29.453420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34388013.mount: Deactivated successfully. 
Sep 13 00:08:29.540005 containerd[1972]: time="2025-09-13T00:08:29.526115618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.547800 containerd[1972]: time="2025-09-13T00:08:29.547460022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:08:29.584257 containerd[1972]: time="2025-09-13T00:08:29.583681224Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.585092 containerd[1972]: time="2025-09-13T00:08:29.584787053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.043993142s" Sep 13 00:08:29.585092 containerd[1972]: time="2025-09-13T00:08:29.584831801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:08:29.585907 containerd[1972]: time="2025-09-13T00:08:29.585878899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.629003 containerd[1972]: time="2025-09-13T00:08:29.628522443Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:08:29.689951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445566643.mount: 
Deactivated successfully. Sep 13 00:08:29.710149 containerd[1972]: time="2025-09-13T00:08:29.710030482Z" level=info msg="CreateContainer within sandbox \"69dbef93c3c597957abcedf1064716364680686b2344aba281af0be76a38f1e8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a80e35e50440db8c364933bc974a0d48daa2f7de124b3842e9a3cf093e17fac\"" Sep 13 00:08:29.710609 containerd[1972]: time="2025-09-13T00:08:29.710583340Z" level=info msg="StartContainer for \"6a80e35e50440db8c364933bc974a0d48daa2f7de124b3842e9a3cf093e17fac\"" Sep 13 00:08:29.819547 systemd[1]: Started cri-containerd-6a80e35e50440db8c364933bc974a0d48daa2f7de124b3842e9a3cf093e17fac.scope - libcontainer container 6a80e35e50440db8c364933bc974a0d48daa2f7de124b3842e9a3cf093e17fac. Sep 13 00:08:29.881479 containerd[1972]: time="2025-09-13T00:08:29.881263165Z" level=info msg="StartContainer for \"6a80e35e50440db8c364933bc974a0d48daa2f7de124b3842e9a3cf093e17fac\" returns successfully" Sep 13 00:08:30.161989 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:08:30.163756 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:08:30.594064 containerd[1972]: time="2025-09-13T00:08:30.594013239Z" level=info msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" Sep 13 00:08:30.823705 kubelet[3206]: I0913 00:08:30.802597 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jb9s9" podStartSLOduration=2.492771462 podStartE2EDuration="19.774694857s" podCreationTimestamp="2025-09-13 00:08:11 +0000 UTC" firstStartedPulling="2025-09-13 00:08:12.304212533 +0000 UTC m=+21.095907513" lastFinishedPulling="2025-09-13 00:08:29.586135923 +0000 UTC m=+38.377830908" observedRunningTime="2025-09-13 00:08:30.737728861 +0000 UTC m=+39.529423850" watchObservedRunningTime="2025-09-13 00:08:30.774694857 +0000 UTC m=+39.566389855" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.775 [INFO][4626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.777 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" iface="eth0" netns="/var/run/netns/cni-cccbf38e-a5d1-420f-f478-95863ba4ca1c" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.778 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" iface="eth0" netns="/var/run/netns/cni-cccbf38e-a5d1-420f-f478-95863ba4ca1c" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.783 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" iface="eth0" netns="/var/run/netns/cni-cccbf38e-a5d1-420f-f478-95863ba4ca1c" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.783 [INFO][4626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:30.784 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.186 [INFO][4644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.194 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.194 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.210 [WARNING][4644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.210 [INFO][4644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.212 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:31.216898 containerd[1972]: 2025-09-13 00:08:31.214 [INFO][4626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:31.223083 systemd[1]: run-netns-cni\x2dcccbf38e\x2da5d1\x2d420f\x2df478\x2d95863ba4ca1c.mount: Deactivated successfully. 
Sep 13 00:08:31.229567 containerd[1972]: time="2025-09-13T00:08:31.229508843Z" level=info msg="TearDown network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" successfully" Sep 13 00:08:31.229567 containerd[1972]: time="2025-09-13T00:08:31.229560233Z" level=info msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" returns successfully" Sep 13 00:08:31.413479 kubelet[3206]: I0913 00:08:31.413416 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs4gp\" (UniqueName: \"kubernetes.io/projected/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-kube-api-access-zs4gp\") pod \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " Sep 13 00:08:31.413618 kubelet[3206]: I0913 00:08:31.413501 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-ca-bundle\") pod \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " Sep 13 00:08:31.413618 kubelet[3206]: I0913 00:08:31.413525 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-backend-key-pair\") pod \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\" (UID: \"82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0\") " Sep 13 00:08:31.419542 systemd[1]: var-lib-kubelet-pods-82bd0e8e\x2d02c0\x2d47c6\x2da6c0\x2dc61583a0d7d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzs4gp.mount: Deactivated successfully. Sep 13 00:08:31.422643 systemd[1]: var-lib-kubelet-pods-82bd0e8e\x2d02c0\x2d47c6\x2da6c0\x2dc61583a0d7d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:08:31.423597 kubelet[3206]: I0913 00:08:31.418767 3206 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-kube-api-access-zs4gp" (OuterVolumeSpecName: "kube-api-access-zs4gp") pod "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0" (UID: "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0"). InnerVolumeSpecName "kube-api-access-zs4gp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:31.423597 kubelet[3206]: I0913 00:08:31.423197 3206 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0" (UID: "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:31.430964 kubelet[3206]: I0913 00:08:31.423269 3206 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0" (UID: "82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:31.514182 kubelet[3206]: I0913 00:08:31.514035 3206 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zs4gp\" (UniqueName: \"kubernetes.io/projected/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-kube-api-access-zs4gp\") on node \"ip-172-31-16-22\" DevicePath \"\"" Sep 13 00:08:31.514182 kubelet[3206]: I0913 00:08:31.514095 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-ca-bundle\") on node \"ip-172-31-16-22\" DevicePath \"\"" Sep 13 00:08:31.514182 kubelet[3206]: I0913 00:08:31.514107 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0-whisker-backend-key-pair\") on node \"ip-172-31-16-22\" DevicePath \"\"" Sep 13 00:08:31.705564 systemd[1]: Removed slice kubepods-besteffort-pod82bd0e8e_02c0_47c6_a6c0_c61583a0d7d0.slice - libcontainer container kubepods-besteffort-pod82bd0e8e_02c0_47c6_a6c0_c61583a0d7d0.slice. Sep 13 00:08:31.895619 systemd[1]: Created slice kubepods-besteffort-pod6b297aae_3713_4f9a_808b_90d9b038e7aa.slice - libcontainer container kubepods-besteffort-pod6b297aae_3713_4f9a_808b_90d9b038e7aa.slice. 
Sep 13 00:08:32.019954 kubelet[3206]: I0913 00:08:32.019889 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b297aae-3713-4f9a-808b-90d9b038e7aa-whisker-backend-key-pair\") pod \"whisker-789c4fd884-ggjrm\" (UID: \"6b297aae-3713-4f9a-808b-90d9b038e7aa\") " pod="calico-system/whisker-789c4fd884-ggjrm" Sep 13 00:08:32.020524 kubelet[3206]: I0913 00:08:32.019966 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b297aae-3713-4f9a-808b-90d9b038e7aa-whisker-ca-bundle\") pod \"whisker-789c4fd884-ggjrm\" (UID: \"6b297aae-3713-4f9a-808b-90d9b038e7aa\") " pod="calico-system/whisker-789c4fd884-ggjrm" Sep 13 00:08:32.020524 kubelet[3206]: I0913 00:08:32.019992 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrkv2\" (UniqueName: \"kubernetes.io/projected/6b297aae-3713-4f9a-808b-90d9b038e7aa-kube-api-access-lrkv2\") pod \"whisker-789c4fd884-ggjrm\" (UID: \"6b297aae-3713-4f9a-808b-90d9b038e7aa\") " pod="calico-system/whisker-789c4fd884-ggjrm" Sep 13 00:08:32.206331 containerd[1972]: time="2025-09-13T00:08:32.205094857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789c4fd884-ggjrm,Uid:6b297aae-3713-4f9a-808b-90d9b038e7aa,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:32.638321 systemd-networkd[1821]: califc7765d1ffe: Link UP Sep 13 00:08:32.639018 (udev-worker)[4604]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:08:32.641916 systemd-networkd[1821]: califc7765d1ffe: Gained carrier Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.408 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.430 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0 whisker-789c4fd884- calico-system 6b297aae-3713-4f9a-808b-90d9b038e7aa 911 0 2025-09-13 00:08:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:789c4fd884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-22 whisker-789c4fd884-ggjrm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califc7765d1ffe [] [] }} ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.430 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.536 [INFO][4800] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" HandleID="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Workload="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.536 [INFO][4800] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" HandleID="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Workload="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038e080), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-22", "pod":"whisker-789c4fd884-ggjrm", "timestamp":"2025-09-13 00:08:32.536195429 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.536 [INFO][4800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.536 [INFO][4800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.536 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.555 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.575 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.583 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.586 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.589 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and 
block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.589 [INFO][4800] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.592 [INFO][4800] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90 Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.599 [INFO][4800] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.613 [INFO][4800] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 handle="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.614 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" host="ip-172-31-16-22" Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.614 [INFO][4800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:08:32.659773 containerd[1972]: 2025-09-13 00:08:32.614 [INFO][4800] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" HandleID="k8s-pod-network.751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Workload="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.620 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0", GenerateName:"whisker-789c4fd884-", Namespace:"calico-system", SelfLink:"", UID:"6b297aae-3713-4f9a-808b-90d9b038e7aa", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789c4fd884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"whisker-789c4fd884-ggjrm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"califc7765d1ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.620 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.65/32] ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.621 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc7765d1ffe ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.638 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.638 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0", GenerateName:"whisker-789c4fd884-", Namespace:"calico-system", SelfLink:"", UID:"6b297aae-3713-4f9a-808b-90d9b038e7aa", ResourceVersion:"911", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"789c4fd884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90", Pod:"whisker-789c4fd884-ggjrm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc7765d1ffe", MAC:"66:aa:5c:dd:06:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:32.662123 containerd[1972]: 2025-09-13 00:08:32.655 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90" Namespace="calico-system" Pod="whisker-789c4fd884-ggjrm" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--789c4fd884--ggjrm-eth0" Sep 13 00:08:32.716340 containerd[1972]: time="2025-09-13T00:08:32.710768415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:32.716340 containerd[1972]: time="2025-09-13T00:08:32.710833978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:32.716340 containerd[1972]: time="2025-09-13T00:08:32.710872947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:32.716340 containerd[1972]: time="2025-09-13T00:08:32.710977200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:32.766519 systemd[1]: Started cri-containerd-751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90.scope - libcontainer container 751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90. Sep 13 00:08:32.878890 containerd[1972]: time="2025-09-13T00:08:32.878841370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-789c4fd884-ggjrm,Uid:6b297aae-3713-4f9a-808b-90d9b038e7aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90\"" Sep 13 00:08:32.881818 containerd[1972]: time="2025-09-13T00:08:32.881767928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:08:33.369082 kubelet[3206]: I0913 00:08:33.368989 3206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0" path="/var/lib/kubelet/pods/82bd0e8e-02c0-47c6-a6c0-c61583a0d7d0/volumes" Sep 13 00:08:34.133823 containerd[1972]: time="2025-09-13T00:08:34.133762342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:34.136012 containerd[1972]: time="2025-09-13T00:08:34.135799758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:08:34.139810 containerd[1972]: time="2025-09-13T00:08:34.138287684Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:34.145991 containerd[1972]: time="2025-09-13T00:08:34.145942360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:34.146896 containerd[1972]: time="2025-09-13T00:08:34.146863887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.26497053s" Sep 13 00:08:34.147019 containerd[1972]: time="2025-09-13T00:08:34.147005776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:08:34.152919 containerd[1972]: time="2025-09-13T00:08:34.152876284Z" level=info msg="CreateContainer within sandbox \"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:08:34.176951 containerd[1972]: time="2025-09-13T00:08:34.176879792Z" level=info msg="CreateContainer within sandbox \"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"533b70cd02204dd64eea5e2b1ba5c2256c08538b0f19751eed1d79c74bbfcdc1\"" Sep 13 00:08:34.178785 containerd[1972]: time="2025-09-13T00:08:34.177667785Z" level=info msg="StartContainer for \"533b70cd02204dd64eea5e2b1ba5c2256c08538b0f19751eed1d79c74bbfcdc1\"" Sep 13 00:08:34.213640 systemd[1]: Started 
cri-containerd-533b70cd02204dd64eea5e2b1ba5c2256c08538b0f19751eed1d79c74bbfcdc1.scope - libcontainer container 533b70cd02204dd64eea5e2b1ba5c2256c08538b0f19751eed1d79c74bbfcdc1. Sep 13 00:08:34.270876 containerd[1972]: time="2025-09-13T00:08:34.270799416Z" level=info msg="StartContainer for \"533b70cd02204dd64eea5e2b1ba5c2256c08538b0f19751eed1d79c74bbfcdc1\" returns successfully" Sep 13 00:08:34.273881 containerd[1972]: time="2025-09-13T00:08:34.273685506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:08:34.368661 containerd[1972]: time="2025-09-13T00:08:34.368612411Z" level=info msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" Sep 13 00:08:34.369212 containerd[1972]: time="2025-09-13T00:08:34.369127749Z" level=info msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" Sep 13 00:08:34.586127 systemd-networkd[1821]: califc7765d1ffe: Gained IPv6LL Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.486 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.487 [INFO][4965] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" iface="eth0" netns="/var/run/netns/cni-6191ef9f-c1f2-0936-b3ed-630d5a03a962" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.487 [INFO][4965] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" iface="eth0" netns="/var/run/netns/cni-6191ef9f-c1f2-0936-b3ed-630d5a03a962" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.488 [INFO][4965] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" iface="eth0" netns="/var/run/netns/cni-6191ef9f-c1f2-0936-b3ed-630d5a03a962" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.488 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.488 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.581 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.583 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.584 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.632 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.633 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.665 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.672622 containerd[1972]: 2025-09-13 00:08:34.667 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:34.674232 containerd[1972]: time="2025-09-13T00:08:34.673268473Z" level=info msg="TearDown network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" successfully" Sep 13 00:08:34.674232 containerd[1972]: time="2025-09-13T00:08:34.673311423Z" level=info msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" returns successfully" Sep 13 00:08:34.679557 containerd[1972]: time="2025-09-13T00:08:34.679152628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhbhm,Uid:790eb649-ffde-4aab-abe6-6ba328fbc032,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:34.682190 systemd[1]: run-netns-cni\x2d6191ef9f\x2dc1f2\x2d0936\x2db3ed\x2d630d5a03a962.mount: Deactivated successfully. 
Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.496 [INFO][4957] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.496 [INFO][4957] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" iface="eth0" netns="/var/run/netns/cni-413f24d9-04b9-33f6-d8d2-536d374edb3e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.498 [INFO][4957] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" iface="eth0" netns="/var/run/netns/cni-413f24d9-04b9-33f6-d8d2-536d374edb3e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.499 [INFO][4957] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" iface="eth0" netns="/var/run/netns/cni-413f24d9-04b9-33f6-d8d2-536d374edb3e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.499 [INFO][4957] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.499 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.593 [INFO][4979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.594 [INFO][4979] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.665 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.688 [WARNING][4979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.688 [INFO][4979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.730 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.748425 containerd[1972]: 2025-09-13 00:08:34.734 [INFO][4957] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:34.749705 containerd[1972]: time="2025-09-13T00:08:34.749482958Z" level=info msg="TearDown network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" successfully" Sep 13 00:08:34.749705 containerd[1972]: time="2025-09-13T00:08:34.749527918Z" level=info msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" returns successfully" Sep 13 00:08:34.752063 containerd[1972]: time="2025-09-13T00:08:34.752014242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-m25gq,Uid:c150b87b-3909-4283-ab15-dcc8b6a0c68d,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:35.108244 systemd-networkd[1821]: cali5b8bcd04067: Link UP Sep 13 00:08:35.110170 systemd-networkd[1821]: cali5b8bcd04067: Gained carrier Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.841 [INFO][4991] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.876 [INFO][4991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0 coredns-668d6bf9bc- kube-system 790eb649-ffde-4aab-abe6-6ba328fbc032 933 0 2025-09-13 00:07:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-22 coredns-668d6bf9bc-dhbhm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b8bcd04067 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 
00:08:34.876 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.987 [INFO][5016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" HandleID="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.993 [INFO][5016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" HandleID="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5a90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-22", "pod":"coredns-668d6bf9bc-dhbhm", "timestamp":"2025-09-13 00:08:34.98731331 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.994 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.994 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:34.994 [INFO][5016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.008 [INFO][5016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.033 [INFO][5016] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.051 [INFO][5016] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.058 [INFO][5016] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.062 [INFO][5016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.062 [INFO][5016] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.065 [INFO][5016] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5 Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.073 [INFO][5016] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.086 [INFO][5016] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 
handle="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.086 [INFO][5016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" host="ip-172-31-16-22" Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.086 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.160022 containerd[1972]: 2025-09-13 00:08:35.088 [INFO][5016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" HandleID="k8s-pod-network.682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.096 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"790eb649-ffde-4aab-abe6-6ba328fbc032", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"coredns-668d6bf9bc-dhbhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b8bcd04067", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.097 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.66/32] ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.097 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b8bcd04067 ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.110 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.112 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"790eb649-ffde-4aab-abe6-6ba328fbc032", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5", Pod:"coredns-668d6bf9bc-dhbhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b8bcd04067", MAC:"ca:67:65:e9:f1:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.163470 containerd[1972]: 2025-09-13 00:08:35.155 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhbhm" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:35.184868 systemd[1]: run-netns-cni\x2d413f24d9\x2d04b9\x2d33f6\x2dd8d2\x2d536d374edb3e.mount: Deactivated successfully. Sep 13 00:08:35.225143 containerd[1972]: time="2025-09-13T00:08:35.225038828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:35.225304 containerd[1972]: time="2025-09-13T00:08:35.225192228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:35.225304 containerd[1972]: time="2025-09-13T00:08:35.225264638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:35.225907 containerd[1972]: time="2025-09-13T00:08:35.225567679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:35.270100 systemd-networkd[1821]: cali63196819d08: Link UP Sep 13 00:08:35.272609 systemd-networkd[1821]: cali63196819d08: Gained carrier Sep 13 00:08:35.324661 systemd[1]: run-containerd-runc-k8s.io-682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5-runc.7bVpUK.mount: Deactivated successfully. Sep 13 00:08:35.336645 systemd[1]: Started cri-containerd-682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5.scope - libcontainer container 682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5. Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.870 [INFO][5000] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.891 [INFO][5000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0 goldmane-54d579b49d- calico-system c150b87b-3909-4283-ab15-dcc8b6a0c68d 934 0 2025-09-13 00:08:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-22 goldmane-54d579b49d-m25gq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63196819d08 [] [] }} ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.891 [INFO][5000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" 
WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.989 [INFO][5025] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" HandleID="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.994 [INFO][5025] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" HandleID="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-22", "pod":"goldmane-54d579b49d-m25gq", "timestamp":"2025-09-13 00:08:34.988134631 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:34.994 [INFO][5025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.086 [INFO][5025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.086 [INFO][5025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.116 [INFO][5025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.141 [INFO][5025] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.170 [INFO][5025] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.177 [INFO][5025] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.186 [INFO][5025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.187 [INFO][5025] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.192 [INFO][5025] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.205 [INFO][5025] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.228 [INFO][5025] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 
handle="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.228 [INFO][5025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" host="ip-172-31-16-22" Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.228 [INFO][5025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.337694 containerd[1972]: 2025-09-13 00:08:35.228 [INFO][5025] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" HandleID="k8s-pod-network.c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.245 [INFO][5000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c150b87b-3909-4283-ab15-dcc8b6a0c68d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"goldmane-54d579b49d-m25gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63196819d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.246 [INFO][5000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.67/32] ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.246 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63196819d08 ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.276 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.276 [INFO][5000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c150b87b-3909-4283-ab15-dcc8b6a0c68d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a", Pod:"goldmane-54d579b49d-m25gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63196819d08", MAC:"8a:12:39:67:bc:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:35.342969 containerd[1972]: 2025-09-13 00:08:35.322 [INFO][5000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a" Namespace="calico-system" Pod="goldmane-54d579b49d-m25gq" 
WorkloadEndpoint="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:35.385941 containerd[1972]: time="2025-09-13T00:08:35.384114803Z" level=info msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" Sep 13 00:08:35.453816 containerd[1972]: time="2025-09-13T00:08:35.438898446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:35.453816 containerd[1972]: time="2025-09-13T00:08:35.453489155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:35.453816 containerd[1972]: time="2025-09-13T00:08:35.453523584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:35.454986 containerd[1972]: time="2025-09-13T00:08:35.454074908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:35.531899 systemd[1]: Started sshd@9-172.31.16.22:22-139.178.89.65:58792.service - OpenSSH per-connection server daemon (139.178.89.65:58792). 
Sep 13 00:08:35.570514 containerd[1972]: time="2025-09-13T00:08:35.566026540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhbhm,Uid:790eb649-ffde-4aab-abe6-6ba328fbc032,Namespace:kube-system,Attempt:1,} returns sandbox id \"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5\"" Sep 13 00:08:35.586335 containerd[1972]: time="2025-09-13T00:08:35.584535860Z" level=info msg="CreateContainer within sandbox \"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:35.585968 systemd[1]: Started cri-containerd-c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a.scope - libcontainer container c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a. Sep 13 00:08:35.681830 containerd[1972]: time="2025-09-13T00:08:35.681352682Z" level=info msg="CreateContainer within sandbox \"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91e49589b8d7c8ec9b51eb78a80a25d252aa2c7763a9f370d1dae28e6b7e8cbf\"" Sep 13 00:08:35.688440 containerd[1972]: time="2025-09-13T00:08:35.686667437Z" level=info msg="StartContainer for \"91e49589b8d7c8ec9b51eb78a80a25d252aa2c7763a9f370d1dae28e6b7e8cbf\"" Sep 13 00:08:35.784895 sshd[5140]: Accepted publickey for core from 139.178.89.65 port 58792 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:35.792926 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:35.806765 systemd-logind[1957]: New session 10 of user core. Sep 13 00:08:35.811887 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 13 00:08:35.827587 containerd[1972]: time="2025-09-13T00:08:35.826624785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-m25gq,Uid:c150b87b-3909-4283-ab15-dcc8b6a0c68d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a\"" Sep 13 00:08:35.855668 systemd[1]: Started cri-containerd-91e49589b8d7c8ec9b51eb78a80a25d252aa2c7763a9f370d1dae28e6b7e8cbf.scope - libcontainer container 91e49589b8d7c8ec9b51eb78a80a25d252aa2c7763a9f370d1dae28e6b7e8cbf. Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.688 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.694 [INFO][5117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" iface="eth0" netns="/var/run/netns/cni-8ce9da86-3577-da3b-3d0b-c69c7d9d9b56" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.701 [INFO][5117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" iface="eth0" netns="/var/run/netns/cni-8ce9da86-3577-da3b-3d0b-c69c7d9d9b56" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.705 [INFO][5117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" iface="eth0" netns="/var/run/netns/cni-8ce9da86-3577-da3b-3d0b-c69c7d9d9b56" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.705 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.705 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.849 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.850 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.850 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.870 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.870 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.876 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:35.884790 containerd[1972]: 2025-09-13 00:08:35.881 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:35.886137 containerd[1972]: time="2025-09-13T00:08:35.885443772Z" level=info msg="TearDown network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" successfully" Sep 13 00:08:35.886137 containerd[1972]: time="2025-09-13T00:08:35.885477673Z" level=info msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" returns successfully" Sep 13 00:08:35.887032 containerd[1972]: time="2025-09-13T00:08:35.886608798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2hfvw,Uid:d46336d8-eb49-44a4-a6a0-97396eeb5284,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:35.957671 containerd[1972]: time="2025-09-13T00:08:35.957555614Z" level=info msg="StartContainer for \"91e49589b8d7c8ec9b51eb78a80a25d252aa2c7763a9f370d1dae28e6b7e8cbf\" returns successfully" Sep 13 00:08:36.189279 systemd[1]: run-netns-cni\x2d8ce9da86\x2d3577\x2dda3b\x2d3d0b\x2dc69c7d9d9b56.mount: Deactivated successfully. 
Sep 13 00:08:36.306175 systemd-networkd[1821]: calie2e53f0d1f2: Link UP Sep 13 00:08:36.309786 systemd-networkd[1821]: calie2e53f0d1f2: Gained carrier Sep 13 00:08:36.374219 containerd[1972]: time="2025-09-13T00:08:36.373742342Z" level=info msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" Sep 13 00:08:36.377994 containerd[1972]: time="2025-09-13T00:08:36.376755910Z" level=info msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.032 [INFO][5203] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.055 [INFO][5203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0 csi-node-driver- calico-system d46336d8-eb49-44a4-a6a0-97396eeb5284 972 0 2025-09-13 00:08:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-22 csi-node-driver-2hfvw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2e53f0d1f2 [] [] }} ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.055 [INFO][5203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 
00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.143 [INFO][5220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" HandleID="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.145 [INFO][5220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" HandleID="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-22", "pod":"csi-node-driver-2hfvw", "timestamp":"2025-09-13 00:08:36.142975144 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.146 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.146 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.146 [INFO][5220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.183 [INFO][5220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.201 [INFO][5220] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.210 [INFO][5220] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.212 [INFO][5220] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.217 [INFO][5220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.217 [INFO][5220] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.221 [INFO][5220] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0 Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.239 [INFO][5220] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.261 [INFO][5220] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 
handle="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.261 [INFO][5220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] handle="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" host="ip-172-31-16-22" Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.262 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:36.385417 containerd[1972]: 2025-09-13 00:08:36.262 [INFO][5220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" HandleID="k8s-pod-network.f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.288 [INFO][5203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d46336d8-eb49-44a4-a6a0-97396eeb5284", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"csi-node-driver-2hfvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2e53f0d1f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.289 [INFO][5203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.68/32] ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.290 [INFO][5203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2e53f0d1f2 ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.313 [INFO][5203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.314 [INFO][5203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d46336d8-eb49-44a4-a6a0-97396eeb5284", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0", Pod:"csi-node-driver-2hfvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2e53f0d1f2", MAC:"be:de:81:d4:29:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:36.387901 containerd[1972]: 2025-09-13 00:08:36.357 [INFO][5203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0" Namespace="calico-system" Pod="csi-node-driver-2hfvw" WorkloadEndpoint="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:36.650332 containerd[1972]: time="2025-09-13T00:08:36.645669301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:36.650332 containerd[1972]: time="2025-09-13T00:08:36.645773225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:36.650332 containerd[1972]: time="2025-09-13T00:08:36.645798304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:36.650332 containerd[1972]: time="2025-09-13T00:08:36.645929396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:36.762760 systemd-networkd[1821]: cali5b8bcd04067: Gained IPv6LL Sep 13 00:08:36.767678 systemd[1]: Started cri-containerd-f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0.scope - libcontainer container f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0. 
Sep 13 00:08:36.891618 systemd-networkd[1821]: cali63196819d08: Gained IPv6LL Sep 13 00:08:37.008209 kubelet[3206]: I0913 00:08:37.008116 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dhbhm" podStartSLOduration=41.008091741 podStartE2EDuration="41.008091741s" podCreationTimestamp="2025-09-13 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:36.994820677 +0000 UTC m=+45.786515660" watchObservedRunningTime="2025-09-13 00:08:37.008091741 +0000 UTC m=+45.799786727" Sep 13 00:08:37.196451 sshd[5140]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:37.210264 systemd[1]: sshd@9-172.31.16.22:22-139.178.89.65:58792.service: Deactivated successfully. Sep 13 00:08:37.217960 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:08:37.221888 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:08:37.226719 systemd-logind[1957]: Removed session 10. Sep 13 00:08:37.232743 containerd[1972]: time="2025-09-13T00:08:37.231497375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2hfvw,Uid:d46336d8-eb49-44a4-a6a0-97396eeb5284,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0\"" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.032 [INFO][5279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.032 [INFO][5279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" iface="eth0" netns="/var/run/netns/cni-1144ea06-385a-0475-ca56-0a26b0b0aa76" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.033 [INFO][5279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" iface="eth0" netns="/var/run/netns/cni-1144ea06-385a-0475-ca56-0a26b0b0aa76" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.035 [INFO][5279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" iface="eth0" netns="/var/run/netns/cni-1144ea06-385a-0475-ca56-0a26b0b0aa76" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.035 [INFO][5279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.035 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.249 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.251 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.251 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.280 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.281 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.286 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.304198 containerd[1972]: 2025-09-13 00:08:37.289 [INFO][5279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:37.309203 containerd[1972]: time="2025-09-13T00:08:37.304725536Z" level=info msg="TearDown network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" successfully" Sep 13 00:08:37.309203 containerd[1972]: time="2025-09-13T00:08:37.304767878Z" level=info msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" returns successfully" Sep 13 00:08:37.309203 containerd[1972]: time="2025-09-13T00:08:37.307318801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-rx6mc,Uid:0c763216-54a0-463c-943a-eb519ffb6816,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:37.319991 systemd[1]: run-netns-cni\x2d1144ea06\x2d385a\x2d0475\x2dca56\x2d0a26b0b0aa76.mount: Deactivated successfully. 
Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.011 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.011 [INFO][5270] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" iface="eth0" netns="/var/run/netns/cni-e7dd86ad-38e0-919b-ba86-577ca6c69f47" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.012 [INFO][5270] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" iface="eth0" netns="/var/run/netns/cni-e7dd86ad-38e0-919b-ba86-577ca6c69f47" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.018 [INFO][5270] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" iface="eth0" netns="/var/run/netns/cni-e7dd86ad-38e0-919b-ba86-577ca6c69f47" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.018 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.018 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.264 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.271 [INFO][5335] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.286 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.314 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.314 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.323 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.342476 containerd[1972]: 2025-09-13 00:08:37.326 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:37.344686 containerd[1972]: time="2025-09-13T00:08:37.343299585Z" level=info msg="TearDown network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" successfully" Sep 13 00:08:37.344686 containerd[1972]: time="2025-09-13T00:08:37.343343138Z" level=info msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" returns successfully" Sep 13 00:08:37.348702 containerd[1972]: time="2025-09-13T00:08:37.348074942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmtk8,Uid:2ce4e15f-f6d2-489b-b74d-a800f2ee80ad,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:37.357099 systemd[1]: run-netns-cni\x2de7dd86ad\x2d38e0\x2d919b\x2dba86\x2d577ca6c69f47.mount: Deactivated successfully. Sep 13 00:08:37.369533 containerd[1972]: time="2025-09-13T00:08:37.367837764Z" level=info msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" Sep 13 00:08:37.859002 systemd-networkd[1821]: calid75b7d3c815: Link UP Sep 13 00:08:37.864696 systemd-networkd[1821]: calid75b7d3c815: Gained carrier Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.523 [INFO][5362] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.573 [INFO][5362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0 calico-apiserver-6c7768f9b8- calico-apiserver 0c763216-54a0-463c-943a-eb519ffb6816 991 0 2025-09-13 00:08:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7768f9b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-22 
calico-apiserver-6c7768f9b8-rx6mc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid75b7d3c815 [] [] }} ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.574 [INFO][5362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.707 [INFO][5409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" HandleID="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.708 [INFO][5409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" HandleID="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393e40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-22", "pod":"calico-apiserver-6c7768f9b8-rx6mc", "timestamp":"2025-09-13 00:08:37.707686608 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.711 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.711 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.712 [INFO][5409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.739 [INFO][5409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.749 [INFO][5409] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.771 [INFO][5409] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.778 [INFO][5409] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.787 [INFO][5409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.787 [INFO][5409] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.791 [INFO][5409] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312 Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.801 [INFO][5409] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 
handle="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.825 [INFO][5409] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 handle="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.825 [INFO][5409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" host="ip-172-31-16-22" Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.825 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:37.931288 containerd[1972]: 2025-09-13 00:08:37.826 [INFO][5409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" HandleID="k8s-pod-network.a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.832 [INFO][5362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c763216-54a0-463c-943a-eb519ffb6816", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"calico-apiserver-6c7768f9b8-rx6mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid75b7d3c815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.833 [INFO][5362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.69/32] ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.833 [INFO][5362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid75b7d3c815 ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.866 [INFO][5362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.872 [INFO][5362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c763216-54a0-463c-943a-eb519ffb6816", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312", Pod:"calico-apiserver-6c7768f9b8-rx6mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid75b7d3c815", MAC:"5e:a9:c5:5e:38:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:37.935142 containerd[1972]: 2025-09-13 00:08:37.921 [INFO][5362] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-rx6mc" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:38.031755 systemd-networkd[1821]: cali86efde001a4: Link UP Sep 13 00:08:38.048372 systemd-networkd[1821]: cali86efde001a4: Gained carrier Sep 13 00:08:38.097222 containerd[1972]: time="2025-09-13T00:08:38.096654406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:38.097222 containerd[1972]: time="2025-09-13T00:08:38.096750003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:38.097222 containerd[1972]: time="2025-09-13T00:08:38.096904437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:38.098686 containerd[1972]: time="2025-09-13T00:08:38.097987896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:38.134990 systemd[1]: Started cri-containerd-a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312.scope - libcontainer container a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312. 
Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.584 [INFO][5384] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.614 [INFO][5384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0 coredns-668d6bf9bc- kube-system 2ce4e15f-f6d2-489b-b74d-a800f2ee80ad 990 0 2025-09-13 00:07:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-22 coredns-668d6bf9bc-qmtk8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86efde001a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.615 [INFO][5384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.813 [INFO][5424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" HandleID="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.815 [INFO][5424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" 
HandleID="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-22", "pod":"coredns-668d6bf9bc-qmtk8", "timestamp":"2025-09-13 00:08:37.813506086 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.816 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.827 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.827 [INFO][5424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.847 [INFO][5424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.879 [INFO][5424] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.915 [INFO][5424] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.929 [INFO][5424] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.937 [INFO][5424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 
00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.938 [INFO][5424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.947 [INFO][5424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.964 [INFO][5424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.989 [INFO][5424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 handle="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.989 [INFO][5424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" host="ip-172-31-16-22" Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.989 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:08:38.139331 containerd[1972]: 2025-09-13 00:08:37.989 [INFO][5424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" HandleID="k8s-pod-network.9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.003 [INFO][5384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"coredns-668d6bf9bc-qmtk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86efde001a4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.004 [INFO][5384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.70/32] ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.004 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86efde001a4 ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.063 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.064 [INFO][5384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa", Pod:"coredns-668d6bf9bc-qmtk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86efde001a4", MAC:"ca:d7:c2:a0:6f:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:38.141999 containerd[1972]: 2025-09-13 00:08:38.112 [INFO][5384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-qmtk8" WorkloadEndpoint="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.613 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.613 [INFO][5382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" iface="eth0" netns="/var/run/netns/cni-1485a671-13e3-9d43-8b5a-24770ac7d485" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.617 [INFO][5382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" iface="eth0" netns="/var/run/netns/cni-1485a671-13e3-9d43-8b5a-24770ac7d485" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.619 [INFO][5382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" iface="eth0" netns="/var/run/netns/cni-1485a671-13e3-9d43-8b5a-24770ac7d485" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.619 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.621 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.820 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.821 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:37.991 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:38.090 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:38.091 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:38.102 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:38.153448 containerd[1972]: 2025-09-13 00:08:38.131 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:38.156028 containerd[1972]: time="2025-09-13T00:08:38.155347379Z" level=info msg="TearDown network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" successfully" Sep 13 00:08:38.156028 containerd[1972]: time="2025-09-13T00:08:38.155758374Z" level=info msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" returns successfully" Sep 13 00:08:38.164490 containerd[1972]: time="2025-09-13T00:08:38.164211625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d996465f-bdr2b,Uid:48dee0db-351b-4286-ac8b-c0ef5144392c,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:38.172066 systemd-networkd[1821]: calie2e53f0d1f2: Gained IPv6LL Sep 13 00:08:38.229544 containerd[1972]: time="2025-09-13T00:08:38.224880592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:38.229544 containerd[1972]: time="2025-09-13T00:08:38.224949922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:38.229544 containerd[1972]: time="2025-09-13T00:08:38.224964128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:38.229544 containerd[1972]: time="2025-09-13T00:08:38.225049727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:38.324223 systemd[1]: run-netns-cni\x2d1485a671\x2d13e3\x2d9d43\x2d8b5a\x2d24770ac7d485.mount: Deactivated successfully. Sep 13 00:08:38.368638 containerd[1972]: time="2025-09-13T00:08:38.368592717Z" level=info msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" Sep 13 00:08:38.385263 systemd[1]: Started cri-containerd-9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa.scope - libcontainer container 9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa. 
Sep 13 00:08:38.475553 containerd[1972]: time="2025-09-13T00:08:38.475351098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-rx6mc,Uid:0c763216-54a0-463c-943a-eb519ffb6816,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312\"" Sep 13 00:08:38.687495 containerd[1972]: time="2025-09-13T00:08:38.687339539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qmtk8,Uid:2ce4e15f-f6d2-489b-b74d-a800f2ee80ad,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa\"" Sep 13 00:08:38.700466 containerd[1972]: time="2025-09-13T00:08:38.700415154Z" level=info msg="CreateContainer within sandbox \"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:38.779248 containerd[1972]: time="2025-09-13T00:08:38.778759281Z" level=info msg="CreateContainer within sandbox \"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"338968ecbf7ea3950b2c81dda1305efbabbd869e48dd8c842ee576e0ec083d57\"" Sep 13 00:08:38.781702 containerd[1972]: time="2025-09-13T00:08:38.781270213Z" level=info msg="StartContainer for \"338968ecbf7ea3950b2c81dda1305efbabbd869e48dd8c842ee576e0ec083d57\"" Sep 13 00:08:38.878199 systemd-networkd[1821]: cali5805967301d: Link UP Sep 13 00:08:38.882291 systemd-networkd[1821]: cali5805967301d: Gained carrier Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.609 [INFO][5543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.611 [INFO][5543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" iface="eth0" netns="/var/run/netns/cni-4695828a-d118-62c7-5d9e-3bbc676a67af" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.611 [INFO][5543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" iface="eth0" netns="/var/run/netns/cni-4695828a-d118-62c7-5d9e-3bbc676a67af" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.614 [INFO][5543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" iface="eth0" netns="/var/run/netns/cni-4695828a-d118-62c7-5d9e-3bbc676a67af" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.614 [INFO][5543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.614 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.805 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.805 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.805 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.856 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.859 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.881 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:38.898336 containerd[1972]: 2025-09-13 00:08:38.890 [INFO][5543] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:38.900606 containerd[1972]: time="2025-09-13T00:08:38.899722261Z" level=info msg="TearDown network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" successfully" Sep 13 00:08:38.900606 containerd[1972]: time="2025-09-13T00:08:38.899874287Z" level=info msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" returns successfully" Sep 13 00:08:38.901878 containerd[1972]: time="2025-09-13T00:08:38.901825555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-wmtxh,Uid:1c5637eb-c80d-4a9c-92f1-4b8bb3195348,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.355 [INFO][5511] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.463 [INFO][5511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0 calico-kube-controllers-86d996465f- calico-system 48dee0db-351b-4286-ac8b-c0ef5144392c 997 0 2025-09-13 00:08:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d996465f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-22 calico-kube-controllers-86d996465f-bdr2b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5805967301d [] [] }} ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-" Sep 13 00:08:38.924807 containerd[1972]: 
2025-09-13 00:08:38.463 [INFO][5511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.690 [INFO][5563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" HandleID="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.691 [INFO][5563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" HandleID="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-22", "pod":"calico-kube-controllers-86d996465f-bdr2b", "timestamp":"2025-09-13 00:08:38.689856094 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.691 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.692 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.692 [INFO][5563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.712 [INFO][5563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.720 [INFO][5563] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.745 [INFO][5563] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.749 [INFO][5563] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.755 [INFO][5563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.755 [INFO][5563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.761 [INFO][5563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30 Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.777 [INFO][5563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.802 [INFO][5563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.71/26] block=192.168.65.64/26 
handle="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.803 [INFO][5563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.71/26] handle="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" host="ip-172-31-16-22" Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.804 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:38.924807 containerd[1972]: 2025-09-13 00:08:38.804 [INFO][5563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.71/26] IPv6=[] ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" HandleID="k8s-pod-network.207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.831 [INFO][5511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0", GenerateName:"calico-kube-controllers-86d996465f-", Namespace:"calico-system", SelfLink:"", UID:"48dee0db-351b-4286-ac8b-c0ef5144392c", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d996465f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"calico-kube-controllers-86d996465f-bdr2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5805967301d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.831 [INFO][5511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.71/32] ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.831 [INFO][5511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5805967301d ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.885 [INFO][5511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" 
WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.887 [INFO][5511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0", GenerateName:"calico-kube-controllers-86d996465f-", Namespace:"calico-system", SelfLink:"", UID:"48dee0db-351b-4286-ac8b-c0ef5144392c", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d996465f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30", Pod:"calico-kube-controllers-86d996465f-bdr2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5805967301d", MAC:"d2:62:b0:42:43:14", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:38.929342 containerd[1972]: 2025-09-13 00:08:38.911 [INFO][5511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30" Namespace="calico-system" Pod="calico-kube-controllers-86d996465f-bdr2b" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:38.938955 systemd[1]: Started cri-containerd-338968ecbf7ea3950b2c81dda1305efbabbd869e48dd8c842ee576e0ec083d57.scope - libcontainer container 338968ecbf7ea3950b2c81dda1305efbabbd869e48dd8c842ee576e0ec083d57. Sep 13 00:08:39.088671 containerd[1972]: time="2025-09-13T00:08:39.085621452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:39.088671 containerd[1972]: time="2025-09-13T00:08:39.085730914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:39.088671 containerd[1972]: time="2025-09-13T00:08:39.085754587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:39.088671 containerd[1972]: time="2025-09-13T00:08:39.085891757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:39.104405 containerd[1972]: time="2025-09-13T00:08:39.101971927Z" level=info msg="StartContainer for \"338968ecbf7ea3950b2c81dda1305efbabbd869e48dd8c842ee576e0ec083d57\" returns successfully" Sep 13 00:08:39.178704 systemd[1]: Started cri-containerd-207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30.scope - libcontainer container 207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30. 
Sep 13 00:08:39.197259 systemd-networkd[1821]: cali86efde001a4: Gained IPv6LL Sep 13 00:08:39.317792 systemd[1]: run-netns-cni\x2d4695828a\x2dd118\x2d62c7\x2d5d9e\x2d3bbc676a67af.mount: Deactivated successfully. Sep 13 00:08:39.397489 systemd-networkd[1821]: cali343be28e32b: Link UP Sep 13 00:08:39.400261 systemd-networkd[1821]: cali343be28e32b: Gained carrier Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.131 [INFO][5624] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.198 [INFO][5624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0 calico-apiserver-6c7768f9b8- calico-apiserver 1c5637eb-c80d-4a9c-92f1-4b8bb3195348 1014 0 2025-09-13 00:08:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7768f9b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-22 calico-apiserver-6c7768f9b8-wmtxh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali343be28e32b [] [] }} ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.199 [INFO][5624] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.279 [INFO][5685] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" HandleID="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.280 [INFO][5685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" HandleID="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-22", "pod":"calico-apiserver-6c7768f9b8-wmtxh", "timestamp":"2025-09-13 00:08:39.279760317 +0000 UTC"}, Hostname:"ip-172-31-16-22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.280 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.280 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.280 [INFO][5685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-22' Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.298 [INFO][5685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.317 [INFO][5685] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.332 [INFO][5685] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.335 [INFO][5685] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.339 [INFO][5685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.339 [INFO][5685] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.342 [INFO][5685] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.355 [INFO][5685] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.383 [INFO][5685] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.65.72/26] block=192.168.65.64/26 
handle="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.383 [INFO][5685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.72/26] handle="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" host="ip-172-31-16-22" Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.383 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:39.451111 containerd[1972]: 2025-09-13 00:08:39.383 [INFO][5685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.72/26] IPv6=[] ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" HandleID="k8s-pod-network.f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.452265 containerd[1972]: 2025-09-13 00:08:39.389 [INFO][5624] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5637eb-c80d-4a9c-92f1-4b8bb3195348", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"", Pod:"calico-apiserver-6c7768f9b8-wmtxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343be28e32b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:39.452265 containerd[1972]: 2025-09-13 00:08:39.390 [INFO][5624] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.72/32] ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.452265 containerd[1972]: 2025-09-13 00:08:39.390 [INFO][5624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali343be28e32b ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.452265 containerd[1972]: 2025-09-13 00:08:39.400 [INFO][5624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.452265 
containerd[1972]: 2025-09-13 00:08:39.401 [INFO][5624] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5637eb-c80d-4a9c-92f1-4b8bb3195348", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd", Pod:"calico-apiserver-6c7768f9b8-wmtxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343be28e32b", MAC:"1e:6c:58:17:91:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:39.452265 
containerd[1972]: 2025-09-13 00:08:39.430 [INFO][5624] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd" Namespace="calico-apiserver" Pod="calico-apiserver-6c7768f9b8-wmtxh" WorkloadEndpoint="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:39.557936 containerd[1972]: time="2025-09-13T00:08:39.556462034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:39.557936 containerd[1972]: time="2025-09-13T00:08:39.556572742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:39.557936 containerd[1972]: time="2025-09-13T00:08:39.556603845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:39.557936 containerd[1972]: time="2025-09-13T00:08:39.556736731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:39.625714 systemd[1]: Started cri-containerd-f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd.scope - libcontainer container f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd. 
Sep 13 00:08:39.729532 containerd[1972]: time="2025-09-13T00:08:39.729477959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d996465f-bdr2b,Uid:48dee0db-351b-4286-ac8b-c0ef5144392c,Namespace:calico-system,Attempt:1,} returns sandbox id \"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30\"" Sep 13 00:08:39.774778 systemd-networkd[1821]: calid75b7d3c815: Gained IPv6LL Sep 13 00:08:39.915025 containerd[1972]: time="2025-09-13T00:08:39.914971004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7768f9b8-wmtxh,Uid:1c5637eb-c80d-4a9c-92f1-4b8bb3195348,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd\"" Sep 13 00:08:39.984201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742517173.mount: Deactivated successfully. Sep 13 00:08:40.024264 containerd[1972]: time="2025-09-13T00:08:40.023600196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.027675 containerd[1972]: time="2025-09-13T00:08:40.026911322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:08:40.032036 containerd[1972]: time="2025-09-13T00:08:40.031985629Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.040884 containerd[1972]: time="2025-09-13T00:08:40.040356901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:40.042801 containerd[1972]: time="2025-09-13T00:08:40.042748010Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.768658014s" Sep 13 00:08:40.042801 containerd[1972]: time="2025-09-13T00:08:40.042802683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:08:40.044599 containerd[1972]: time="2025-09-13T00:08:40.044556325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:08:40.046615 containerd[1972]: time="2025-09-13T00:08:40.046571222Z" level=info msg="CreateContainer within sandbox \"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:08:40.077950 containerd[1972]: time="2025-09-13T00:08:40.077898950Z" level=info msg="CreateContainer within sandbox \"751459895b08d854422b46f459509e424f844dcd56ff85410e527263b311de90\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dd0df901a90282638c206b50698e96ae5f36d0d27c59e1cb15ca46b1c2e0c46e\"" Sep 13 00:08:40.079968 containerd[1972]: time="2025-09-13T00:08:40.079926695Z" level=info msg="StartContainer for \"dd0df901a90282638c206b50698e96ae5f36d0d27c59e1cb15ca46b1c2e0c46e\"" Sep 13 00:08:40.101775 kubelet[3206]: I0913 00:08:40.101703 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qmtk8" podStartSLOduration=44.101678307 podStartE2EDuration="44.101678307s" podCreationTimestamp="2025-09-13 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 
00:08:40.050765306 +0000 UTC m=+48.842460294" watchObservedRunningTime="2025-09-13 00:08:40.101678307 +0000 UTC m=+48.893373295" Sep 13 00:08:40.130649 systemd[1]: Started cri-containerd-dd0df901a90282638c206b50698e96ae5f36d0d27c59e1cb15ca46b1c2e0c46e.scope - libcontainer container dd0df901a90282638c206b50698e96ae5f36d0d27c59e1cb15ca46b1c2e0c46e. Sep 13 00:08:40.323891 containerd[1972]: time="2025-09-13T00:08:40.323769268Z" level=info msg="StartContainer for \"dd0df901a90282638c206b50698e96ae5f36d0d27c59e1cb15ca46b1c2e0c46e\" returns successfully" Sep 13 00:08:40.504920 kubelet[3206]: I0913 00:08:40.504869 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:40.541048 systemd-networkd[1821]: cali343be28e32b: Gained IPv6LL Sep 13 00:08:40.730613 systemd-networkd[1821]: cali5805967301d: Gained IPv6LL Sep 13 00:08:40.921439 kernel: bpftool[5835]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:08:41.074717 kubelet[3206]: I0913 00:08:41.074534 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-789c4fd884-ggjrm" podStartSLOduration=2.910795226 podStartE2EDuration="10.074511614s" podCreationTimestamp="2025-09-13 00:08:31 +0000 UTC" firstStartedPulling="2025-09-13 00:08:32.880357673 +0000 UTC m=+41.672052651" lastFinishedPulling="2025-09-13 00:08:40.044074057 +0000 UTC m=+48.835769039" observedRunningTime="2025-09-13 00:08:41.073449471 +0000 UTC m=+49.865144461" watchObservedRunningTime="2025-09-13 00:08:41.074511614 +0000 UTC m=+49.866206600" Sep 13 00:08:41.602911 systemd-networkd[1821]: vxlan.calico: Link UP Sep 13 00:08:41.602937 systemd-networkd[1821]: vxlan.calico: Gained carrier Sep 13 00:08:42.066598 (udev-worker)[4605]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:08:42.242773 systemd[1]: Started sshd@10-172.31.16.22:22-139.178.89.65:43132.service - OpenSSH per-connection server daemon (139.178.89.65:43132). 
Sep 13 00:08:42.475079 sshd[5914]: Accepted publickey for core from 139.178.89.65 port 43132 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:42.480995 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:42.493467 systemd-logind[1957]: New session 11 of user core. Sep 13 00:08:42.499577 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:08:43.285359 sshd[5914]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:43.289999 systemd[1]: sshd@10-172.31.16.22:22-139.178.89.65:43132.service: Deactivated successfully. Sep 13 00:08:43.292474 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:08:43.294925 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:08:43.296431 systemd-logind[1957]: Removed session 11. Sep 13 00:08:43.609529 systemd-networkd[1821]: vxlan.calico: Gained IPv6LL Sep 13 00:08:44.292102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031365488.mount: Deactivated successfully. 
Sep 13 00:08:45.252338 containerd[1972]: time="2025-09-13T00:08:45.252283047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:45.256432 containerd[1972]: time="2025-09-13T00:08:45.255990831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:08:45.259623 containerd[1972]: time="2025-09-13T00:08:45.259536487Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:45.265049 containerd[1972]: time="2025-09-13T00:08:45.264994636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:45.268102 containerd[1972]: time="2025-09-13T00:08:45.267621906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.223013096s" Sep 13 00:08:45.268102 containerd[1972]: time="2025-09-13T00:08:45.267671279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:08:45.306527 containerd[1972]: time="2025-09-13T00:08:45.306474715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:08:45.320449 containerd[1972]: time="2025-09-13T00:08:45.319905655Z" level=info msg="CreateContainer within sandbox \"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:08:45.376336 containerd[1972]: time="2025-09-13T00:08:45.376277608Z" level=info msg="CreateContainer within sandbox \"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c\"" Sep 13 00:08:45.381632 containerd[1972]: time="2025-09-13T00:08:45.380693288Z" level=info msg="StartContainer for \"6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c\"" Sep 13 00:08:45.501819 systemd[1]: Started cri-containerd-6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c.scope - libcontainer container 6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c. Sep 13 00:08:45.620112 containerd[1972]: time="2025-09-13T00:08:45.619151144Z" level=info msg="StartContainer for \"6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c\" returns successfully" Sep 13 00:08:46.063144 ntpd[1952]: Listen normally on 8 vxlan.calico 192.168.65.64:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 8 vxlan.calico 192.168.65.64:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 9 califc7765d1ffe [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 10 cali5b8bcd04067 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 11 cali63196819d08 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 12 calie2e53f0d1f2 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 13 calid75b7d3c815 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 14 cali86efde001a4 
[fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 15 cali5805967301d [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 16 cali343be28e32b [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:08:46.064300 ntpd[1952]: 13 Sep 00:08:46 ntpd[1952]: Listen normally on 17 vxlan.calico [fe80::64e9:4cff:fe40:df78%12]:123 Sep 13 00:08:46.063224 ntpd[1952]: Listen normally on 9 califc7765d1ffe [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:08:46.063268 ntpd[1952]: Listen normally on 10 cali5b8bcd04067 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 13 00:08:46.063743 ntpd[1952]: Listen normally on 11 cali63196819d08 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 13 00:08:46.063795 ntpd[1952]: Listen normally on 12 calie2e53f0d1f2 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:08:46.063825 ntpd[1952]: Listen normally on 13 calid75b7d3c815 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:08:46.063853 ntpd[1952]: Listen normally on 14 cali86efde001a4 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:08:46.063880 ntpd[1952]: Listen normally on 15 cali5805967301d [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:08:46.063908 ntpd[1952]: Listen normally on 16 cali343be28e32b [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:08:46.063954 ntpd[1952]: Listen normally on 17 vxlan.calico [fe80::64e9:4cff:fe40:df78%12]:123 Sep 13 00:08:46.163834 kubelet[3206]: I0913 00:08:46.162006 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-m25gq" podStartSLOduration=25.722540241 podStartE2EDuration="35.161972027s" podCreationTimestamp="2025-09-13 00:08:11 +0000 UTC" firstStartedPulling="2025-09-13 00:08:35.829623544 +0000 UTC m=+44.621318623" lastFinishedPulling="2025-09-13 00:08:45.269055436 +0000 UTC m=+54.060750409" observedRunningTime="2025-09-13 00:08:46.140534839 +0000 UTC m=+54.932229819" watchObservedRunningTime="2025-09-13 00:08:46.161972027 +0000 
UTC m=+54.953667018" Sep 13 00:08:46.807892 containerd[1972]: time="2025-09-13T00:08:46.807827257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:46.809420 containerd[1972]: time="2025-09-13T00:08:46.809315539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:08:46.811632 containerd[1972]: time="2025-09-13T00:08:46.811565735Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:46.819211 containerd[1972]: time="2025-09-13T00:08:46.819147298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:46.820703 containerd[1972]: time="2025-09-13T00:08:46.820668363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.51413589s" Sep 13 00:08:46.820703 containerd[1972]: time="2025-09-13T00:08:46.820703978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:08:46.821837 containerd[1972]: time="2025-09-13T00:08:46.821542384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:46.825483 containerd[1972]: time="2025-09-13T00:08:46.824850912Z" level=info msg="CreateContainer within sandbox \"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:08:46.906036 containerd[1972]: time="2025-09-13T00:08:46.905950110Z" level=info msg="CreateContainer within sandbox \"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003\"" Sep 13 00:08:46.909094 containerd[1972]: time="2025-09-13T00:08:46.909042553Z" level=info msg="StartContainer for \"a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003\"" Sep 13 00:08:46.990931 systemd[1]: run-containerd-runc-k8s.io-a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003-runc.ulG825.mount: Deactivated successfully. Sep 13 00:08:47.009864 systemd[1]: Started cri-containerd-a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003.scope - libcontainer container a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003. Sep 13 00:08:47.073356 containerd[1972]: time="2025-09-13T00:08:47.072830650Z" level=info msg="StartContainer for \"a5420ea983d61f93dc6a6f5e77adede66d2123928eb1879c8276665c3e6b2003\" returns successfully" Sep 13 00:08:48.328567 systemd[1]: Started sshd@11-172.31.16.22:22-139.178.89.65:43138.service - OpenSSH per-connection server daemon (139.178.89.65:43138). Sep 13 00:08:48.585207 sshd[6106]: Accepted publickey for core from 139.178.89.65 port 43138 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:48.590166 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:48.603583 systemd-logind[1957]: New session 12 of user core. Sep 13 00:08:48.608698 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:08:49.999599 sshd[6106]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:50.007807 systemd[1]: sshd@11-172.31.16.22:22-139.178.89.65:43138.service: Deactivated successfully. 
Sep 13 00:08:50.013377 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:08:50.016761 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:08:50.035590 systemd-logind[1957]: Removed session 12. Sep 13 00:08:50.043644 systemd[1]: Started sshd@12-172.31.16.22:22-139.178.89.65:48038.service - OpenSSH per-connection server daemon (139.178.89.65:48038). Sep 13 00:08:50.268874 sshd[6120]: Accepted publickey for core from 139.178.89.65 port 48038 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:50.273070 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:50.285622 systemd-logind[1957]: New session 13 of user core. Sep 13 00:08:50.290366 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:08:50.497631 containerd[1972]: time="2025-09-13T00:08:50.497160366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:50.499822 containerd[1972]: time="2025-09-13T00:08:50.499739663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:08:50.501954 containerd[1972]: time="2025-09-13T00:08:50.501879818Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:50.506674 containerd[1972]: time="2025-09-13T00:08:50.505807549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:50.513224 containerd[1972]: time="2025-09-13T00:08:50.513133517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.691539528s" Sep 13 00:08:50.513224 containerd[1972]: time="2025-09-13T00:08:50.513181959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:50.564599 containerd[1972]: time="2025-09-13T00:08:50.563888283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:08:50.646183 containerd[1972]: time="2025-09-13T00:08:50.646129679Z" level=info msg="CreateContainer within sandbox \"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:50.674350 containerd[1972]: time="2025-09-13T00:08:50.674304041Z" level=info msg="CreateContainer within sandbox \"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ddcd2954268ace12747e9cf47e13db3c57f42e02db66f836e6cb206ed847bb9\"" Sep 13 00:08:50.679676 containerd[1972]: time="2025-09-13T00:08:50.679178165Z" level=info msg="StartContainer for \"0ddcd2954268ace12747e9cf47e13db3c57f42e02db66f836e6cb206ed847bb9\"" Sep 13 00:08:50.777674 systemd[1]: Started cri-containerd-0ddcd2954268ace12747e9cf47e13db3c57f42e02db66f836e6cb206ed847bb9.scope - libcontainer container 0ddcd2954268ace12747e9cf47e13db3c57f42e02db66f836e6cb206ed847bb9. 
Sep 13 00:08:50.899931 containerd[1972]: time="2025-09-13T00:08:50.899705743Z" level=info msg="StartContainer for \"0ddcd2954268ace12747e9cf47e13db3c57f42e02db66f836e6cb206ed847bb9\" returns successfully" Sep 13 00:08:50.994993 sshd[6120]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:51.008627 systemd[1]: sshd@12-172.31.16.22:22-139.178.89.65:48038.service: Deactivated successfully. Sep 13 00:08:51.012144 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:51.033545 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:51.046760 systemd[1]: Started sshd@13-172.31.16.22:22-139.178.89.65:48050.service - OpenSSH per-connection server daemon (139.178.89.65:48050). Sep 13 00:08:51.052959 systemd-logind[1957]: Removed session 13. Sep 13 00:08:51.256798 sshd[6169]: Accepted publickey for core from 139.178.89.65 port 48050 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:51.257884 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:51.269357 systemd-logind[1957]: New session 14 of user core. Sep 13 00:08:51.273645 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:08:52.058961 sshd[6169]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:52.067063 systemd[1]: sshd@13-172.31.16.22:22-139.178.89.65:48050.service: Deactivated successfully. Sep 13 00:08:52.072296 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:52.079828 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:52.088534 systemd-logind[1957]: Removed session 14. 
Sep 13 00:08:52.159795 containerd[1972]: time="2025-09-13T00:08:52.159733577Z" level=info msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" Sep 13 00:08:52.288529 kubelet[3206]: I0913 00:08:52.270342 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7768f9b8-rx6mc" podStartSLOduration=33.153388483 podStartE2EDuration="45.225523174s" podCreationTimestamp="2025-09-13 00:08:07 +0000 UTC" firstStartedPulling="2025-09-13 00:08:38.477587736 +0000 UTC m=+47.269282704" lastFinishedPulling="2025-09-13 00:08:50.549722414 +0000 UTC m=+59.341417395" observedRunningTime="2025-09-13 00:08:52.221054798 +0000 UTC m=+61.012749780" watchObservedRunningTime="2025-09-13 00:08:52.225523174 +0000 UTC m=+61.017218159" Sep 13 00:08:53.178730 kubelet[3206]: I0913 00:08:53.172054 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:52.786 [WARNING][6198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d46336d8-eb49-44a4-a6a0-97396eeb5284", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0", Pod:"csi-node-driver-2hfvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2e53f0d1f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:52.795 [INFO][6198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:52.795 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" iface="eth0" netns="" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:52.795 [INFO][6198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:52.795 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.247 [INFO][6207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.249 [INFO][6207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.251 [INFO][6207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.268 [WARNING][6207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.268 [INFO][6207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.270 [INFO][6207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:53.274659 containerd[1972]: 2025-09-13 00:08:53.272 [INFO][6198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.291754 containerd[1972]: time="2025-09-13T00:08:53.291699175Z" level=info msg="TearDown network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" successfully" Sep 13 00:08:53.293111 containerd[1972]: time="2025-09-13T00:08:53.291926069Z" level=info msg="StopPodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" returns successfully" Sep 13 00:08:53.541623 containerd[1972]: time="2025-09-13T00:08:53.541468520Z" level=info msg="RemovePodSandbox for \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" Sep 13 00:08:53.567997 containerd[1972]: time="2025-09-13T00:08:53.546939841Z" level=info msg="Forcibly stopping sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\"" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.625 [WARNING][6230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d46336d8-eb49-44a4-a6a0-97396eeb5284", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0", Pod:"csi-node-driver-2hfvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2e53f0d1f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.625 [INFO][6230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.625 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" iface="eth0" netns="" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.626 [INFO][6230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.626 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.661 [INFO][6237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.661 [INFO][6237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.661 [INFO][6237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.669 [WARNING][6237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.669 [INFO][6237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" HandleID="k8s-pod-network.0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Workload="ip--172--31--16--22-k8s-csi--node--driver--2hfvw-eth0" Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.672 [INFO][6237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:53.680458 containerd[1972]: 2025-09-13 00:08:53.676 [INFO][6230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92" Sep 13 00:08:53.682277 containerd[1972]: time="2025-09-13T00:08:53.680928636Z" level=info msg="TearDown network for sandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" successfully" Sep 13 00:08:53.714121 containerd[1972]: time="2025-09-13T00:08:53.714053646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:53.736699 containerd[1972]: time="2025-09-13T00:08:53.736451298Z" level=info msg="RemovePodSandbox \"0f5ac97439fc948625f0bd360b86669ea4fa75ad3d76bd25b0dd5ec330eabf92\" returns successfully" Sep 13 00:08:53.757677 containerd[1972]: time="2025-09-13T00:08:53.757627115Z" level=info msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.830 [WARNING][6251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5637eb-c80d-4a9c-92f1-4b8bb3195348", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd", Pod:"calico-apiserver-6c7768f9b8-wmtxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343be28e32b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.830 [INFO][6251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.831 [INFO][6251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" iface="eth0" netns="" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.831 [INFO][6251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.831 [INFO][6251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.888 [INFO][6260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.888 [INFO][6260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.888 [INFO][6260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.898 [WARNING][6260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.898 [INFO][6260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.900 [INFO][6260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:53.908902 containerd[1972]: 2025-09-13 00:08:53.904 [INFO][6251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:53.912305 containerd[1972]: time="2025-09-13T00:08:53.909084218Z" level=info msg="TearDown network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" successfully" Sep 13 00:08:53.912305 containerd[1972]: time="2025-09-13T00:08:53.909995300Z" level=info msg="StopPodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" returns successfully" Sep 13 00:08:53.912305 containerd[1972]: time="2025-09-13T00:08:53.910944092Z" level=info msg="RemovePodSandbox for \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" Sep 13 00:08:53.912305 containerd[1972]: time="2025-09-13T00:08:53.910981066Z" level=info msg="Forcibly stopping sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\"" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.039 [WARNING][6277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c5637eb-c80d-4a9c-92f1-4b8bb3195348", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd", Pod:"calico-apiserver-6c7768f9b8-wmtxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343be28e32b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.039 [INFO][6277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.039 [INFO][6277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" iface="eth0" netns="" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.039 [INFO][6277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.039 [INFO][6277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.070 [INFO][6285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.070 [INFO][6285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.070 [INFO][6285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.078 [WARNING][6285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.078 [INFO][6285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" HandleID="k8s-pod-network.29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--wmtxh-eth0" Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.081 [INFO][6285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.086976 containerd[1972]: 2025-09-13 00:08:54.084 [INFO][6277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec" Sep 13 00:08:54.088099 containerd[1972]: time="2025-09-13T00:08:54.087076578Z" level=info msg="TearDown network for sandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" successfully" Sep 13 00:08:54.095580 containerd[1972]: time="2025-09-13T00:08:54.095520583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:54.096207 containerd[1972]: time="2025-09-13T00:08:54.095596858Z" level=info msg="RemovePodSandbox \"29c1bb037bd563f2ffab97a45726385ebd8524d4b17724a23e1f80f361df6aec\" returns successfully" Sep 13 00:08:54.103467 containerd[1972]: time="2025-09-13T00:08:54.102997507Z" level=info msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.149 [WARNING][6299] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.149 [INFO][6299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.149 [INFO][6299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" iface="eth0" netns="" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.149 [INFO][6299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.149 [INFO][6299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.193 [INFO][6306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.193 [INFO][6306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.193 [INFO][6306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.201 [WARNING][6306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.201 [INFO][6306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.204 [INFO][6306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.208837 containerd[1972]: 2025-09-13 00:08:54.206 [INFO][6299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.210285 containerd[1972]: time="2025-09-13T00:08:54.208888647Z" level=info msg="TearDown network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" successfully" Sep 13 00:08:54.210285 containerd[1972]: time="2025-09-13T00:08:54.208929869Z" level=info msg="StopPodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" returns successfully" Sep 13 00:08:54.210285 containerd[1972]: time="2025-09-13T00:08:54.209926496Z" level=info msg="RemovePodSandbox for \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" Sep 13 00:08:54.210285 containerd[1972]: time="2025-09-13T00:08:54.209960772Z" level=info msg="Forcibly stopping sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\"" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.267 [WARNING][6320] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" WorkloadEndpoint="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.267 [INFO][6320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.267 [INFO][6320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" iface="eth0" netns="" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.267 [INFO][6320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.268 [INFO][6320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.326 [INFO][6327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.327 [INFO][6327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.327 [INFO][6327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.343 [WARNING][6327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.343 [INFO][6327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" HandleID="k8s-pod-network.2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Workload="ip--172--31--16--22-k8s-whisker--8c785d7cc--7x628-eth0" Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.345 [INFO][6327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.355128 containerd[1972]: 2025-09-13 00:08:54.350 [INFO][6320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59" Sep 13 00:08:54.356567 containerd[1972]: time="2025-09-13T00:08:54.355176444Z" level=info msg="TearDown network for sandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" successfully" Sep 13 00:08:54.366022 containerd[1972]: time="2025-09-13T00:08:54.365963883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:54.366165 containerd[1972]: time="2025-09-13T00:08:54.366063403Z" level=info msg="RemovePodSandbox \"2c3cbb32475e85d6c4e89d184e11fe222524a519800e6c5fc9ad8b31d8090c59\" returns successfully" Sep 13 00:08:54.367834 containerd[1972]: time="2025-09-13T00:08:54.367451220Z" level=info msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.461 [WARNING][6341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"790eb649-ffde-4aab-abe6-6ba328fbc032", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5", Pod:"coredns-668d6bf9bc-dhbhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b8bcd04067", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.462 [INFO][6341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.462 [INFO][6341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" iface="eth0" netns="" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.462 [INFO][6341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.462 [INFO][6341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.499 [INFO][6349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.499 [INFO][6349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.499 [INFO][6349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.506 [WARNING][6349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.506 [INFO][6349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.508 [INFO][6349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.515549 containerd[1972]: 2025-09-13 00:08:54.511 [INFO][6341] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.515549 containerd[1972]: time="2025-09-13T00:08:54.513870977Z" level=info msg="TearDown network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" successfully" Sep 13 00:08:54.515549 containerd[1972]: time="2025-09-13T00:08:54.513901781Z" level=info msg="StopPodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" returns successfully" Sep 13 00:08:54.519019 containerd[1972]: time="2025-09-13T00:08:54.518942317Z" level=info msg="RemovePodSandbox for \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" Sep 13 00:08:54.519127 containerd[1972]: time="2025-09-13T00:08:54.519023458Z" level=info msg="Forcibly stopping sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\"" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.573 [WARNING][6363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"790eb649-ffde-4aab-abe6-6ba328fbc032", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"682287f1286be9713818db58f0d0f5bb59a85c5edab900e27d75a071a1f0ccb5", Pod:"coredns-668d6bf9bc-dhbhm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b8bcd04067", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.573 
[INFO][6363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.573 [INFO][6363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" iface="eth0" netns="" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.574 [INFO][6363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.574 [INFO][6363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.599 [INFO][6370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.601 [INFO][6370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.601 [INFO][6370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.612 [WARNING][6370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.612 [INFO][6370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" HandleID="k8s-pod-network.3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--dhbhm-eth0" Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.614 [INFO][6370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.620016 containerd[1972]: 2025-09-13 00:08:54.617 [INFO][6363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159" Sep 13 00:08:54.622216 containerd[1972]: time="2025-09-13T00:08:54.620225760Z" level=info msg="TearDown network for sandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" successfully" Sep 13 00:08:54.628405 containerd[1972]: time="2025-09-13T00:08:54.628188842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:54.628405 containerd[1972]: time="2025-09-13T00:08:54.628267853Z" level=info msg="RemovePodSandbox \"3d59300d429187836f433edbb3c795c10e6188e76866c013952aaac02e2f1159\" returns successfully" Sep 13 00:08:54.629654 containerd[1972]: time="2025-09-13T00:08:54.629484039Z" level=info msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.675 [WARNING][6385] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c763216-54a0-463c-943a-eb519ffb6816", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312", Pod:"calico-apiserver-6c7768f9b8-rx6mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid75b7d3c815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.676 [INFO][6385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.676 [INFO][6385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" iface="eth0" netns="" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.676 [INFO][6385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.676 [INFO][6385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.713 [INFO][6392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.713 [INFO][6392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.713 [INFO][6392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.724 [WARNING][6392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.724 [INFO][6392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.728 [INFO][6392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.732936 containerd[1972]: 2025-09-13 00:08:54.730 [INFO][6385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.735769 containerd[1972]: time="2025-09-13T00:08:54.733547953Z" level=info msg="TearDown network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" successfully" Sep 13 00:08:54.735769 containerd[1972]: time="2025-09-13T00:08:54.733574716Z" level=info msg="StopPodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" returns successfully" Sep 13 00:08:54.735769 containerd[1972]: time="2025-09-13T00:08:54.734289508Z" level=info msg="RemovePodSandbox for \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" Sep 13 00:08:54.735769 containerd[1972]: time="2025-09-13T00:08:54.734315906Z" level=info msg="Forcibly stopping sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\"" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.780 [WARNING][6407] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0", GenerateName:"calico-apiserver-6c7768f9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c763216-54a0-463c-943a-eb519ffb6816", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7768f9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"a69c851cab146ea3ebaaad6fa7760082993eb7c19c4791307c630fec6e326312", Pod:"calico-apiserver-6c7768f9b8-rx6mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid75b7d3c815", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.780 [INFO][6407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.780 [INFO][6407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" iface="eth0" netns="" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.780 [INFO][6407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.780 [INFO][6407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.817 [INFO][6414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.818 [INFO][6414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.818 [INFO][6414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.832 [WARNING][6414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.833 [INFO][6414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" HandleID="k8s-pod-network.aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Workload="ip--172--31--16--22-k8s-calico--apiserver--6c7768f9b8--rx6mc-eth0" Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.836 [INFO][6414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:54.841972 containerd[1972]: 2025-09-13 00:08:54.839 [INFO][6407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a" Sep 13 00:08:54.842832 containerd[1972]: time="2025-09-13T00:08:54.842027693Z" level=info msg="TearDown network for sandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" successfully" Sep 13 00:08:54.930407 containerd[1972]: time="2025-09-13T00:08:54.930250433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:54.930575 containerd[1972]: time="2025-09-13T00:08:54.930545932Z" level=info msg="RemovePodSandbox \"aead28511fe22dba759ccb8b4cec37cef09661bb43246adda783308dcb1b376a\" returns successfully" Sep 13 00:08:54.955890 containerd[1972]: time="2025-09-13T00:08:54.955731591Z" level=info msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.082 [WARNING][6429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0", GenerateName:"calico-kube-controllers-86d996465f-", Namespace:"calico-system", SelfLink:"", UID:"48dee0db-351b-4286-ac8b-c0ef5144392c", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d996465f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30", Pod:"calico-kube-controllers-86d996465f-bdr2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5805967301d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.083 [INFO][6429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.083 [INFO][6429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" iface="eth0" netns="" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.083 [INFO][6429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.084 [INFO][6429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.154 [INFO][6436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.155 [INFO][6436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.155 [INFO][6436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.164 [WARNING][6436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.165 [INFO][6436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.167 [INFO][6436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:55.174321 containerd[1972]: 2025-09-13 00:08:55.171 [INFO][6429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.175538 containerd[1972]: time="2025-09-13T00:08:55.175254460Z" level=info msg="TearDown network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" successfully" Sep 13 00:08:55.175538 containerd[1972]: time="2025-09-13T00:08:55.175305293Z" level=info msg="StopPodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" returns successfully" Sep 13 00:08:55.176444 containerd[1972]: time="2025-09-13T00:08:55.176023763Z" level=info msg="RemovePodSandbox for \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" Sep 13 00:08:55.176444 containerd[1972]: time="2025-09-13T00:08:55.176063369Z" level=info msg="Forcibly stopping sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\"" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.243 [WARNING][6450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0", GenerateName:"calico-kube-controllers-86d996465f-", Namespace:"calico-system", SelfLink:"", UID:"48dee0db-351b-4286-ac8b-c0ef5144392c", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d996465f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30", Pod:"calico-kube-controllers-86d996465f-bdr2b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5805967301d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.245 [INFO][6450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.246 [INFO][6450] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" iface="eth0" netns="" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.246 [INFO][6450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.246 [INFO][6450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.303 [INFO][6461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.305 [INFO][6461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.305 [INFO][6461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.320 [WARNING][6461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.320 [INFO][6461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" HandleID="k8s-pod-network.3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Workload="ip--172--31--16--22-k8s-calico--kube--controllers--86d996465f--bdr2b-eth0" Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.325 [INFO][6461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:55.348489 containerd[1972]: 2025-09-13 00:08:55.336 [INFO][6450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716" Sep 13 00:08:55.350419 containerd[1972]: time="2025-09-13T00:08:55.349254309Z" level=info msg="TearDown network for sandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" successfully" Sep 13 00:08:55.362462 containerd[1972]: time="2025-09-13T00:08:55.362413468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:55.367163 containerd[1972]: time="2025-09-13T00:08:55.366712331Z" level=info msg="RemovePodSandbox \"3c6eabe8d60ca4ef9606116e4a0d778104624894cafb60b1d7964197368a0716\" returns successfully" Sep 13 00:08:55.372862 containerd[1972]: time="2025-09-13T00:08:55.371809523Z" level=info msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.507 [WARNING][6475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c150b87b-3909-4283-ab15-dcc8b6a0c68d", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a", Pod:"goldmane-54d579b49d-m25gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali63196819d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.509 [INFO][6475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.509 [INFO][6475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" iface="eth0" netns="" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.509 [INFO][6475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.509 [INFO][6475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.573 [INFO][6482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.574 [INFO][6482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.574 [INFO][6482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.586 [WARNING][6482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.586 [INFO][6482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.589 [INFO][6482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:55.599423 containerd[1972]: 2025-09-13 00:08:55.593 [INFO][6475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.602092 containerd[1972]: time="2025-09-13T00:08:55.601756989Z" level=info msg="TearDown network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" successfully" Sep 13 00:08:55.602092 containerd[1972]: time="2025-09-13T00:08:55.601786536Z" level=info msg="StopPodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" returns successfully" Sep 13 00:08:55.602853 containerd[1972]: time="2025-09-13T00:08:55.602813900Z" level=info msg="RemovePodSandbox for \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" Sep 13 00:08:55.603551 containerd[1972]: time="2025-09-13T00:08:55.602854847Z" level=info msg="Forcibly stopping sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\"" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.689 [WARNING][6496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c150b87b-3909-4283-ab15-dcc8b6a0c68d", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"c6fbd8ce6cd101b463d62d1b5b53c811d7e0099ce2b175ad462749f3b782818a", Pod:"goldmane-54d579b49d-m25gq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63196819d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.690 [INFO][6496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.690 [INFO][6496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" iface="eth0" netns="" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.690 [INFO][6496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.690 [INFO][6496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.743 [INFO][6504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.743 [INFO][6504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.743 [INFO][6504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.756 [WARNING][6504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.756 [INFO][6504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" HandleID="k8s-pod-network.3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Workload="ip--172--31--16--22-k8s-goldmane--54d579b49d--m25gq-eth0" Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.765 [INFO][6504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:55.774060 containerd[1972]: 2025-09-13 00:08:55.770 [INFO][6496] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e" Sep 13 00:08:55.774060 containerd[1972]: time="2025-09-13T00:08:55.773957981Z" level=info msg="TearDown network for sandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" successfully" Sep 13 00:08:55.780353 containerd[1972]: time="2025-09-13T00:08:55.778982322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:55.780353 containerd[1972]: time="2025-09-13T00:08:55.779068013Z" level=info msg="RemovePodSandbox \"3edf0c2631e5a8e9c53e5920e2c3555b905156af09a531619dc709405b79c31e\" returns successfully" Sep 13 00:08:55.782858 containerd[1972]: time="2025-09-13T00:08:55.782501748Z" level=info msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.864 [WARNING][6518] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa", Pod:"coredns-668d6bf9bc-qmtk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86efde001a4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.864 [INFO][6518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.864 [INFO][6518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" iface="eth0" netns="" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.864 [INFO][6518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.864 [INFO][6518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.928 [INFO][6525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.929 [INFO][6525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.929 [INFO][6525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.942 [WARNING][6525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.942 [INFO][6525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.947 [INFO][6525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:55.959583 containerd[1972]: 2025-09-13 00:08:55.953 [INFO][6518] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:55.962189 containerd[1972]: time="2025-09-13T00:08:55.961479417Z" level=info msg="TearDown network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" successfully" Sep 13 00:08:55.962189 containerd[1972]: time="2025-09-13T00:08:55.961535197Z" level=info msg="StopPodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" returns successfully" Sep 13 00:08:55.962864 containerd[1972]: time="2025-09-13T00:08:55.962829668Z" level=info msg="RemovePodSandbox for \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" Sep 13 00:08:55.962986 containerd[1972]: time="2025-09-13T00:08:55.962868715Z" level=info msg="Forcibly stopping sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\"" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.051 [WARNING][6540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2ce4e15f-f6d2-489b-b74d-a800f2ee80ad", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-22", ContainerID:"9f4cc77749d84413c8667f3e9d1d6991bbdef4750946c8da008306b53eb628fa", Pod:"coredns-668d6bf9bc-qmtk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86efde001a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.052 
[INFO][6540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.052 [INFO][6540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" iface="eth0" netns="" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.052 [INFO][6540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.052 [INFO][6540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.108 [INFO][6547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.108 [INFO][6547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.109 [INFO][6547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.123 [WARNING][6547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.123 [INFO][6547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" HandleID="k8s-pod-network.e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Workload="ip--172--31--16--22-k8s-coredns--668d6bf9bc--qmtk8-eth0" Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.125 [INFO][6547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:56.135159 containerd[1972]: 2025-09-13 00:08:56.129 [INFO][6540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069" Sep 13 00:08:56.137417 containerd[1972]: time="2025-09-13T00:08:56.136483540Z" level=info msg="TearDown network for sandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" successfully" Sep 13 00:08:56.144225 containerd[1972]: time="2025-09-13T00:08:56.144175073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:08:56.144527 containerd[1972]: time="2025-09-13T00:08:56.144481180Z" level=info msg="RemovePodSandbox \"e997a492fb00a7dfb2f341ff4353c8a323d25a50fff191dcd07af74fa543c069\" returns successfully" Sep 13 00:08:57.133187 systemd[1]: Started sshd@14-172.31.16.22:22-139.178.89.65:48056.service - OpenSSH per-connection server daemon (139.178.89.65:48056). 
Sep 13 00:08:57.137103 containerd[1972]: time="2025-09-13T00:08:57.137059775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.145424 containerd[1972]: time="2025-09-13T00:08:57.142848932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:08:57.145424 containerd[1972]: time="2025-09-13T00:08:57.145142608Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.171052 containerd[1972]: time="2025-09-13T00:08:57.170693782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.186264 containerd[1972]: time="2025-09-13T00:08:57.185459356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.614061178s" Sep 13 00:08:57.186264 containerd[1972]: time="2025-09-13T00:08:57.185529396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:08:57.327768 containerd[1972]: time="2025-09-13T00:08:57.326242746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:57.455969 sshd[6554]: Accepted publickey for core from 139.178.89.65 port 48056 ssh2: RSA 
SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:08:57.457322 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:57.470152 systemd-logind[1957]: New session 15 of user core. Sep 13 00:08:57.478630 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:08:57.526213 containerd[1972]: time="2025-09-13T00:08:57.526175870Z" level=info msg="CreateContainer within sandbox \"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:08:57.564088 containerd[1972]: time="2025-09-13T00:08:57.564040973Z" level=info msg="CreateContainer within sandbox \"207c742e58ac11a5acff4bf32c5ac7b4efcdfbd24f46e3fe311de1a5e4370f30\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d19a50dd9eaca4c5c7dfd161b7fda5781c63833ec2ee5bd69998213df1ff3380\"" Sep 13 00:08:57.565618 containerd[1972]: time="2025-09-13T00:08:57.564864739Z" level=info msg="StartContainer for \"d19a50dd9eaca4c5c7dfd161b7fda5781c63833ec2ee5bd69998213df1ff3380\"" Sep 13 00:08:57.692515 containerd[1972]: time="2025-09-13T00:08:57.691808228Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:57.694474 containerd[1972]: time="2025-09-13T00:08:57.694256167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:08:57.700444 containerd[1972]: time="2025-09-13T00:08:57.700377288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 374.070341ms" 
Sep 13 00:08:57.701447 containerd[1972]: time="2025-09-13T00:08:57.700449626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:57.736409 containerd[1972]: time="2025-09-13T00:08:57.736236228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:08:57.739879 containerd[1972]: time="2025-09-13T00:08:57.739821140Z" level=info msg="CreateContainer within sandbox \"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:57.814625 containerd[1972]: time="2025-09-13T00:08:57.814481222Z" level=info msg="CreateContainer within sandbox \"f4cb4bd92a4b9cb77cb354f8dc3f3f183bca6c2e9ac6d58ef82040cf0ece60dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09d7574e05da4d64dcbee3f8035e7cf9b7b0293062ef737fd3522abdb8fa5da2\"" Sep 13 00:08:57.819739 containerd[1972]: time="2025-09-13T00:08:57.818835350Z" level=info msg="StartContainer for \"09d7574e05da4d64dcbee3f8035e7cf9b7b0293062ef737fd3522abdb8fa5da2\"" Sep 13 00:08:58.011792 systemd[1]: Started cri-containerd-09d7574e05da4d64dcbee3f8035e7cf9b7b0293062ef737fd3522abdb8fa5da2.scope - libcontainer container 09d7574e05da4d64dcbee3f8035e7cf9b7b0293062ef737fd3522abdb8fa5da2. Sep 13 00:08:58.012876 systemd[1]: Started cri-containerd-d19a50dd9eaca4c5c7dfd161b7fda5781c63833ec2ee5bd69998213df1ff3380.scope - libcontainer container d19a50dd9eaca4c5c7dfd161b7fda5781c63833ec2ee5bd69998213df1ff3380. 
Sep 13 00:08:58.181402 containerd[1972]: time="2025-09-13T00:08:58.180653042Z" level=info msg="StartContainer for \"09d7574e05da4d64dcbee3f8035e7cf9b7b0293062ef737fd3522abdb8fa5da2\" returns successfully"
Sep 13 00:08:58.201280 containerd[1972]: time="2025-09-13T00:08:58.198217901Z" level=info msg="StartContainer for \"d19a50dd9eaca4c5c7dfd161b7fda5781c63833ec2ee5bd69998213df1ff3380\" returns successfully"
Sep 13 00:08:59.325081 sshd[6554]: pam_unix(sshd:session): session closed for user core
Sep 13 00:08:59.331349 systemd[1]: sshd@14-172.31.16.22:22-139.178.89.65:48056.service: Deactivated successfully.
Sep 13 00:08:59.338378 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:08:59.345096 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:08:59.346989 systemd-logind[1957]: Removed session 15.
Sep 13 00:08:59.424331 kubelet[3206]: I0913 00:08:59.424185 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7768f9b8-wmtxh" podStartSLOduration=34.559154875 podStartE2EDuration="52.373101724s" podCreationTimestamp="2025-09-13 00:08:07 +0000 UTC" firstStartedPulling="2025-09-13 00:08:39.919034974 +0000 UTC m=+48.710729956" lastFinishedPulling="2025-09-13 00:08:57.732981822 +0000 UTC m=+66.524676805" observedRunningTime="2025-09-13 00:08:59.116067677 +0000 UTC m=+67.907762658" watchObservedRunningTime="2025-09-13 00:08:59.373101724 +0000 UTC m=+68.164796714"
Sep 13 00:08:59.427564 kubelet[3206]: I0913 00:08:59.425334 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86d996465f-bdr2b" podStartSLOduration=30.832483729 podStartE2EDuration="48.425316164s" podCreationTimestamp="2025-09-13 00:08:11 +0000 UTC" firstStartedPulling="2025-09-13 00:08:39.733034812 +0000 UTC m=+48.524729795" lastFinishedPulling="2025-09-13 00:08:57.325867249 +0000 UTC m=+66.117562230" observedRunningTime="2025-09-13 00:08:59.341829089 +0000 UTC m=+68.133524076" watchObservedRunningTime="2025-09-13 00:08:59.425316164 +0000 UTC m=+68.217011154"
Sep 13 00:09:00.931707 containerd[1972]: time="2025-09-13T00:09:00.931614918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:09:00.933683 containerd[1972]: time="2025-09-13T00:09:00.933552240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:09:00.938064 containerd[1972]: time="2025-09-13T00:09:00.936805563Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:09:00.940759 containerd[1972]: time="2025-09-13T00:09:00.940678584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:09:00.941992 containerd[1972]: time="2025-09-13T00:09:00.941894217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.205292578s"
Sep 13 00:09:00.941992 containerd[1972]: time="2025-09-13T00:09:00.941948323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:09:01.023527 containerd[1972]: time="2025-09-13T00:09:01.023465299Z" level=info msg="CreateContainer within sandbox \"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:09:01.111420 containerd[1972]: time="2025-09-13T00:09:01.111144095Z" level=info msg="CreateContainer within sandbox \"f7cfa7ce7f55a3fdaba569361d9ea2c63fadd2db962a695ce5e5e8d426a8d2a0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c9e593acc3de7e1f1715309929608555918dee8096467d51af235a2cf7208917\""
Sep 13 00:09:01.132413 containerd[1972]: time="2025-09-13T00:09:01.131375879Z" level=info msg="StartContainer for \"c9e593acc3de7e1f1715309929608555918dee8096467d51af235a2cf7208917\""
Sep 13 00:09:01.249674 systemd[1]: Started cri-containerd-c9e593acc3de7e1f1715309929608555918dee8096467d51af235a2cf7208917.scope - libcontainer container c9e593acc3de7e1f1715309929608555918dee8096467d51af235a2cf7208917.
Sep 13 00:09:01.330969 containerd[1972]: time="2025-09-13T00:09:01.330837111Z" level=info msg="StartContainer for \"c9e593acc3de7e1f1715309929608555918dee8096467d51af235a2cf7208917\" returns successfully"
Sep 13 00:09:01.753074 kubelet[3206]: I0913 00:09:01.752969 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2hfvw" podStartSLOduration=27.002638176 podStartE2EDuration="50.752946259s" podCreationTimestamp="2025-09-13 00:08:11 +0000 UTC" firstStartedPulling="2025-09-13 00:08:37.235597532 +0000 UTC m=+46.027292512" lastFinishedPulling="2025-09-13 00:09:00.985905617 +0000 UTC m=+69.777600595" observedRunningTime="2025-09-13 00:09:01.751377596 +0000 UTC m=+70.543072582" watchObservedRunningTime="2025-09-13 00:09:01.752946259 +0000 UTC m=+70.544641389"
Sep 13 00:09:01.758890 kubelet[3206]: I0913 00:09:01.755919 3206 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:09:01.758890 kubelet[3206]: I0913 00:09:01.758729 3206 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:09:04.395513 systemd[1]: Started sshd@15-172.31.16.22:22-139.178.89.65:58514.service - OpenSSH per-connection server daemon (139.178.89.65:58514).
Sep 13 00:09:04.664858 sshd[6754]: Accepted publickey for core from 139.178.89.65 port 58514 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:04.668416 sshd[6754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:04.676008 systemd-logind[1957]: New session 16 of user core.
Sep 13 00:09:04.681978 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:09:05.728130 sshd[6754]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:05.731473 systemd[1]: sshd@15-172.31.16.22:22-139.178.89.65:58514.service: Deactivated successfully.
Sep 13 00:09:05.733521 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:09:05.735045 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:09:05.736741 systemd-logind[1957]: Removed session 16.
Sep 13 00:09:10.767796 systemd[1]: Started sshd@16-172.31.16.22:22-139.178.89.65:38606.service - OpenSSH per-connection server daemon (139.178.89.65:38606).
Sep 13 00:09:11.041479 sshd[6770]: Accepted publickey for core from 139.178.89.65 port 38606 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:11.043971 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:11.050478 systemd-logind[1957]: New session 17 of user core.
Sep 13 00:09:11.054637 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:09:11.855949 sshd[6770]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:11.862256 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:09:11.862556 systemd[1]: sshd@16-172.31.16.22:22-139.178.89.65:38606.service: Deactivated successfully.
Sep 13 00:09:11.865082 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:09:11.866161 systemd-logind[1957]: Removed session 17.
Sep 13 00:09:15.199906 kubelet[3206]: I0913 00:09:15.199841 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:09:16.894190 systemd[1]: Started sshd@17-172.31.16.22:22-139.178.89.65:38614.service - OpenSSH per-connection server daemon (139.178.89.65:38614).
Sep 13 00:09:17.188421 sshd[6806]: Accepted publickey for core from 139.178.89.65 port 38614 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:17.190712 sshd[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:17.208902 systemd-logind[1957]: New session 18 of user core.
Sep 13 00:09:17.214944 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:09:18.152548 sshd[6806]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:18.156865 systemd[1]: sshd@17-172.31.16.22:22-139.178.89.65:38614.service: Deactivated successfully.
Sep 13 00:09:18.158710 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:09:18.159790 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:09:18.161870 systemd-logind[1957]: Removed session 18.
Sep 13 00:09:18.191092 systemd[1]: Started sshd@18-172.31.16.22:22-139.178.89.65:38630.service - OpenSSH per-connection server daemon (139.178.89.65:38630).
Sep 13 00:09:18.357876 sshd[6819]: Accepted publickey for core from 139.178.89.65 port 38630 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:18.359325 sshd[6819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:18.365588 systemd-logind[1957]: New session 19 of user core.
Sep 13 00:09:18.369738 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:09:19.022606 sshd[6819]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:19.029501 systemd[1]: sshd@18-172.31.16.22:22-139.178.89.65:38630.service: Deactivated successfully.
Sep 13 00:09:19.032817 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:09:19.036079 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:09:19.039016 systemd-logind[1957]: Removed session 19.
Sep 13 00:09:19.065857 systemd[1]: Started sshd@19-172.31.16.22:22-139.178.89.65:38632.service - OpenSSH per-connection server daemon (139.178.89.65:38632).
Sep 13 00:09:19.253732 sshd[6830]: Accepted publickey for core from 139.178.89.65 port 38632 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:19.257003 sshd[6830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:19.262653 systemd-logind[1957]: New session 20 of user core.
Sep 13 00:09:19.268629 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:09:20.160315 sshd[6830]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:20.173153 systemd[1]: sshd@19-172.31.16.22:22-139.178.89.65:38632.service: Deactivated successfully.
Sep 13 00:09:20.175728 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:09:20.177124 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:09:20.191728 systemd[1]: Started sshd@20-172.31.16.22:22-139.178.89.65:57604.service - OpenSSH per-connection server daemon (139.178.89.65:57604).
Sep 13 00:09:20.192566 systemd-logind[1957]: Removed session 20.
Sep 13 00:09:20.446110 sshd[6848]: Accepted publickey for core from 139.178.89.65 port 57604 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:20.447329 sshd[6848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:20.454227 systemd-logind[1957]: New session 21 of user core.
Sep 13 00:09:20.459659 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:09:21.389779 sshd[6848]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:21.407734 systemd[1]: sshd@20-172.31.16.22:22-139.178.89.65:57604.service: Deactivated successfully.
Sep 13 00:09:21.416119 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:09:21.420660 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:09:21.443509 systemd[1]: Started sshd@21-172.31.16.22:22-139.178.89.65:57620.service - OpenSSH per-connection server daemon (139.178.89.65:57620).
Sep 13 00:09:21.445320 systemd-logind[1957]: Removed session 21.
Sep 13 00:09:21.652690 sshd[6859]: Accepted publickey for core from 139.178.89.65 port 57620 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:21.655450 sshd[6859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:21.661941 systemd-logind[1957]: New session 22 of user core.
Sep 13 00:09:21.665667 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:09:22.032274 sshd[6859]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:22.037424 systemd[1]: sshd@21-172.31.16.22:22-139.178.89.65:57620.service: Deactivated successfully.
Sep 13 00:09:22.039809 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:09:22.041120 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:09:22.042409 systemd-logind[1957]: Removed session 22.
Sep 13 00:09:27.068650 systemd[1]: Started sshd@22-172.31.16.22:22-139.178.89.65:57636.service - OpenSSH per-connection server daemon (139.178.89.65:57636).
Sep 13 00:09:27.325888 sshd[6882]: Accepted publickey for core from 139.178.89.65 port 57636 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:27.328106 sshd[6882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:27.333873 systemd-logind[1957]: New session 23 of user core.
Sep 13 00:09:27.339651 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:09:27.903677 sshd[6882]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:27.907847 systemd[1]: sshd@22-172.31.16.22:22-139.178.89.65:57636.service: Deactivated successfully.
Sep 13 00:09:27.910121 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:09:27.911600 systemd-logind[1957]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:09:27.913565 systemd-logind[1957]: Removed session 23.
Sep 13 00:09:28.585131 systemd[1]: run-containerd-runc-k8s.io-6e6a1275fa8bc393bb487f74b882d041e76fce4dead67ec1d5f5b28f29a7270c-runc.2dkd7A.mount: Deactivated successfully.
Sep 13 00:09:32.953885 systemd[1]: Started sshd@23-172.31.16.22:22-139.178.89.65:42420.service - OpenSSH per-connection server daemon (139.178.89.65:42420).
Sep 13 00:09:33.258496 sshd[6956]: Accepted publickey for core from 139.178.89.65 port 42420 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:33.260346 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:33.275475 systemd-logind[1957]: New session 24 of user core.
Sep 13 00:09:33.278652 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:09:34.569668 sshd[6956]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:34.574004 systemd[1]: sshd@23-172.31.16.22:22-139.178.89.65:42420.service: Deactivated successfully.
Sep 13 00:09:34.578299 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:09:34.586463 systemd-logind[1957]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:09:34.591451 systemd-logind[1957]: Removed session 24.
Sep 13 00:09:39.605067 systemd[1]: Started sshd@24-172.31.16.22:22-139.178.89.65:42432.service - OpenSSH per-connection server daemon (139.178.89.65:42432).
Sep 13 00:09:39.807589 sshd[6972]: Accepted publickey for core from 139.178.89.65 port 42432 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:39.809528 sshd[6972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:39.818075 systemd-logind[1957]: New session 25 of user core.
Sep 13 00:09:39.827491 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:09:40.365565 sshd[6972]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:40.372045 systemd-logind[1957]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:09:40.375528 systemd[1]: sshd@24-172.31.16.22:22-139.178.89.65:42432.service: Deactivated successfully.
Sep 13 00:09:40.380146 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:09:40.382030 systemd-logind[1957]: Removed session 25.
Sep 13 00:09:45.404221 systemd[1]: Started sshd@25-172.31.16.22:22-139.178.89.65:39312.service - OpenSSH per-connection server daemon (139.178.89.65:39312).
Sep 13 00:09:45.634155 sshd[7004]: Accepted publickey for core from 139.178.89.65 port 39312 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:45.637171 sshd[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:45.645985 systemd-logind[1957]: New session 26 of user core.
Sep 13 00:09:45.654540 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:09:45.989014 sshd[7004]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:45.996835 systemd-logind[1957]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:09:45.999665 systemd[1]: sshd@25-172.31.16.22:22-139.178.89.65:39312.service: Deactivated successfully.
Sep 13 00:09:46.003990 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:09:46.006686 systemd-logind[1957]: Removed session 26.
Sep 13 00:09:51.027891 systemd[1]: Started sshd@26-172.31.16.22:22-139.178.89.65:48206.service - OpenSSH per-connection server daemon (139.178.89.65:48206).
Sep 13 00:09:51.330128 sshd[7039]: Accepted publickey for core from 139.178.89.65 port 48206 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:51.331548 sshd[7039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:51.340576 systemd-logind[1957]: New session 27 of user core.
Sep 13 00:09:51.346775 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:09:52.007456 sshd[7039]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:52.017565 systemd[1]: sshd@26-172.31.16.22:22-139.178.89.65:48206.service: Deactivated successfully.
Sep 13 00:09:52.020979 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:09:52.022215 systemd-logind[1957]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:09:52.024519 systemd-logind[1957]: Removed session 27.
Sep 13 00:09:57.049870 systemd[1]: Started sshd@27-172.31.16.22:22-139.178.89.65:48212.service - OpenSSH per-connection server daemon (139.178.89.65:48212).
Sep 13 00:09:57.251702 sshd[7055]: Accepted publickey for core from 139.178.89.65 port 48212 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:09:57.255797 sshd[7055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:09:57.264123 systemd-logind[1957]: New session 28 of user core.
Sep 13 00:09:57.273822 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 13 00:09:57.699144 sshd[7055]: pam_unix(sshd:session): session closed for user core
Sep 13 00:09:57.705331 systemd[1]: sshd@27-172.31.16.22:22-139.178.89.65:48212.service: Deactivated successfully.
Sep 13 00:09:57.709300 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:09:57.712485 systemd-logind[1957]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:09:57.715784 systemd-logind[1957]: Removed session 28.
Sep 13 00:10:02.047547 kernel: hrtimer: interrupt took 2282428 ns
Sep 13 00:10:11.008588 systemd[1]: cri-containerd-ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009.scope: Deactivated successfully.
Sep 13 00:10:11.008919 systemd[1]: cri-containerd-ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009.scope: Consumed 13.701s CPU time.
Sep 13 00:10:11.335042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009-rootfs.mount: Deactivated successfully.
Sep 13 00:10:11.393429 systemd[1]: cri-containerd-7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11.scope: Deactivated successfully.
Sep 13 00:10:11.393770 systemd[1]: cri-containerd-7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11.scope: Consumed 4.326s CPU time, 32.3M memory peak, 0B memory swap peak.
Sep 13 00:10:11.470645 containerd[1972]: time="2025-09-13T00:10:11.393825858Z" level=info msg="shim disconnected" id=ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009 namespace=k8s.io
Sep 13 00:10:11.470645 containerd[1972]: time="2025-09-13T00:10:11.470500986Z" level=warning msg="cleaning up after shim disconnected" id=ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009 namespace=k8s.io
Sep 13 00:10:11.470645 containerd[1972]: time="2025-09-13T00:10:11.470549540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:11.479037 containerd[1972]: time="2025-09-13T00:10:11.478973661Z" level=info msg="shim disconnected" id=7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11 namespace=k8s.io
Sep 13 00:10:11.479217 containerd[1972]: time="2025-09-13T00:10:11.479193066Z" level=warning msg="cleaning up after shim disconnected" id=7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11 namespace=k8s.io
Sep 13 00:10:11.479297 containerd[1972]: time="2025-09-13T00:10:11.479283169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:11.480052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11-rootfs.mount: Deactivated successfully.
Sep 13 00:10:12.192450 kubelet[3206]: I0913 00:10:12.192371 3206 scope.go:117] "RemoveContainer" containerID="ee84f7de54d3441066e6904e0da9cbd7a2f4ad07d46a713e08c8fda074678009"
Sep 13 00:10:12.197025 kubelet[3206]: I0913 00:10:12.195836 3206 scope.go:117] "RemoveContainer" containerID="7abd8d6ca6b6af9c2e6be10718bdd17333c6af1269be2812664d7040c19cef11"
Sep 13 00:10:12.280883 containerd[1972]: time="2025-09-13T00:10:12.280656572Z" level=info msg="CreateContainer within sandbox \"7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:10:12.282195 containerd[1972]: time="2025-09-13T00:10:12.280656701Z" level=info msg="CreateContainer within sandbox \"d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:10:12.412956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249750538.mount: Deactivated successfully.
Sep 13 00:10:12.431681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445052376.mount: Deactivated successfully.
Sep 13 00:10:12.455712 containerd[1972]: time="2025-09-13T00:10:12.455512916Z" level=info msg="CreateContainer within sandbox \"d8a8a3aefdcc4d5835aab7ba933fd535ffa39594bbc8dfb0768179f50a86f761\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fa99752a2679f5d41b2bf2b1f9643526845ea7f5e81bbf1cd8ade6a5fd1bf3e8\""
Sep 13 00:10:12.459453 containerd[1972]: time="2025-09-13T00:10:12.458309927Z" level=info msg="StartContainer for \"fa99752a2679f5d41b2bf2b1f9643526845ea7f5e81bbf1cd8ade6a5fd1bf3e8\""
Sep 13 00:10:12.476417 containerd[1972]: time="2025-09-13T00:10:12.474972965Z" level=info msg="CreateContainer within sandbox \"7491d70d4eae60bb2709fe132d74d23f6def0e123495235fcfa913c683396245\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2e43d01ef7151e00c50b9a53a1042a23832557b5cf230744616d4c95f359dd7\""
Sep 13 00:10:12.477534 containerd[1972]: time="2025-09-13T00:10:12.477497479Z" level=info msg="StartContainer for \"a2e43d01ef7151e00c50b9a53a1042a23832557b5cf230744616d4c95f359dd7\""
Sep 13 00:10:12.534166 systemd[1]: Started cri-containerd-a2e43d01ef7151e00c50b9a53a1042a23832557b5cf230744616d4c95f359dd7.scope - libcontainer container a2e43d01ef7151e00c50b9a53a1042a23832557b5cf230744616d4c95f359dd7.
Sep 13 00:10:12.545676 systemd[1]: Started cri-containerd-fa99752a2679f5d41b2bf2b1f9643526845ea7f5e81bbf1cd8ade6a5fd1bf3e8.scope - libcontainer container fa99752a2679f5d41b2bf2b1f9643526845ea7f5e81bbf1cd8ade6a5fd1bf3e8.
Sep 13 00:10:12.624183 containerd[1972]: time="2025-09-13T00:10:12.623991988Z" level=info msg="StartContainer for \"fa99752a2679f5d41b2bf2b1f9643526845ea7f5e81bbf1cd8ade6a5fd1bf3e8\" returns successfully"
Sep 13 00:10:12.624591 containerd[1972]: time="2025-09-13T00:10:12.624138576Z" level=info msg="StartContainer for \"a2e43d01ef7151e00c50b9a53a1042a23832557b5cf230744616d4c95f359dd7\" returns successfully"
Sep 13 00:10:14.381659 kubelet[3206]: E0913 00:10:14.381579 3206 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-22?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 13 00:10:16.102852 systemd[1]: cri-containerd-3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3.scope: Deactivated successfully.
Sep 13 00:10:16.103088 systemd[1]: cri-containerd-3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3.scope: Consumed 2.664s CPU time, 21.4M memory peak, 0B memory swap peak.
Sep 13 00:10:16.194096 containerd[1972]: time="2025-09-13T00:10:16.193657188Z" level=info msg="shim disconnected" id=3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3 namespace=k8s.io
Sep 13 00:10:16.194096 containerd[1972]: time="2025-09-13T00:10:16.193745079Z" level=warning msg="cleaning up after shim disconnected" id=3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3 namespace=k8s.io
Sep 13 00:10:16.194096 containerd[1972]: time="2025-09-13T00:10:16.193757494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:16.197656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3-rootfs.mount: Deactivated successfully.
Sep 13 00:10:16.244156 containerd[1972]: time="2025-09-13T00:10:16.244079138Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:10:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:10:17.229787 kubelet[3206]: I0913 00:10:17.229668 3206 scope.go:117] "RemoveContainer" containerID="3b111e1f4e9c788f1c34887a70cda71fe989b2f4ed64f7aff86993d16b12b7a3"
Sep 13 00:10:17.231867 containerd[1972]: time="2025-09-13T00:10:17.231829939Z" level=info msg="CreateContainer within sandbox \"51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:10:17.274581 containerd[1972]: time="2025-09-13T00:10:17.274526037Z" level=info msg="CreateContainer within sandbox \"51f33658bcc29fcb892082173953f1575d4e1dee0d9ba7063e5a5fa688d5399d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126\""
Sep 13 00:10:17.275158 containerd[1972]: time="2025-09-13T00:10:17.275120638Z" level=info msg="StartContainer for \"f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126\""
Sep 13 00:10:17.310725 systemd[1]: run-containerd-runc-k8s.io-f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126-runc.E6staW.mount: Deactivated successfully.
Sep 13 00:10:17.320729 systemd[1]: Started cri-containerd-f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126.scope - libcontainer container f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126.
Sep 13 00:10:17.381498 containerd[1972]: time="2025-09-13T00:10:17.381370651Z" level=info msg="StartContainer for \"f963b7ed4ec3eeed5742e201638d47313cbf2b5ea6de9ca7c210279c9de9f126\" returns successfully"