Sep 12 17:43:10.974855 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:43:10.974893 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:43:10.974912 kernel: BIOS-provided physical RAM map:
Sep 12 17:43:10.975516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:43:10.975538 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:43:10.975550 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 12 17:43:10.975564 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 12 17:43:10.975577 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:43:10.975590 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:43:10.975607 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:43:10.975619 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:43:10.975631 kernel: NX (Execute Disable) protection: active
Sep 12 17:43:10.975643 kernel: APIC: Static calls initialized
Sep 12 17:43:10.975656 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:43:10.975672 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:43:10.975689 kernel: SMBIOS 2.7 present.
Sep 12 17:43:10.975702 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:43:10.975716 kernel: Hypervisor detected: KVM
Sep 12 17:43:10.975729 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:43:10.975743 kernel: kvm-clock: using sched offset of 3657684321 cycles
Sep 12 17:43:10.975758 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:43:10.975772 kernel: tsc: Detected 2499.998 MHz processor
Sep 12 17:43:10.975786 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:43:10.975800 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:43:10.975814 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:43:10.975831 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:43:10.975843 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:43:10.975854 kernel: Using GB pages for direct mapping
Sep 12 17:43:10.975866 kernel: Secure boot disabled
Sep 12 17:43:10.975879 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:43:10.975890 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:43:10.975903 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:43:10.975916 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:43:10.975942 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:43:10.975958 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:43:10.975970 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:43:10.975982 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:43:10.975995 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:43:10.976006 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:43:10.976019 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:43:10.976036 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:43:10.976052 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:43:10.976065 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:43:10.976078 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:43:10.976092 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:43:10.976104 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:43:10.976117 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:43:10.976130 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:43:10.976147 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:43:10.976160 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:43:10.976172 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:43:10.976185 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:43:10.976198 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:43:10.976211 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:43:10.976223 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:43:10.976237 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:43:10.976250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:43:10.976266 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:43:10.976278 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 12 17:43:10.976290 kernel: Zone ranges:
Sep 12 17:43:10.976302 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:43:10.976316 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:43:10.976329 kernel: Normal empty
Sep 12 17:43:10.976342 kernel: Movable zone start for each node
Sep 12 17:43:10.976354 kernel: Early memory node ranges
Sep 12 17:43:10.976368 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:43:10.976383 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:43:10.976396 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:43:10.976409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:43:10.976421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:43:10.976435 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:43:10.976448 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:43:10.976461 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:43:10.976475 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:43:10.976487 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:43:10.976505 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:43:10.976518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:43:10.976531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:43:10.976545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:43:10.976558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:43:10.976570 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:43:10.976583 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:43:10.976596 kernel: TSC deadline timer available
Sep 12 17:43:10.976609 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:43:10.976623 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:43:10.976639 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:43:10.976652 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:43:10.976665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:43:10.976679 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:43:10.976692 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:43:10.976706 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:43:10.976718 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:43:10.976731 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:43:10.976744 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:43:10.976760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:43:10.976774 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:43:10.976787 kernel: random: crng init done
Sep 12 17:43:10.976800 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:43:10.976813 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:43:10.976826 kernel: Fallback order for Node 0: 0
Sep 12 17:43:10.976842 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 12 17:43:10.976858 kernel: Policy zone: DMA32
Sep 12 17:43:10.976871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:43:10.976885 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162940K reserved, 0K cma-reserved)
Sep 12 17:43:10.976898 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:43:10.976911 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:43:10.976944 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:43:10.976957 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:43:10.976970 kernel: Dynamic Preempt: voluntary
Sep 12 17:43:10.976984 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:43:10.977002 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:43:10.977014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:43:10.977027 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:43:10.977040 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:43:10.977053 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:43:10.977067 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:43:10.977080 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:43:10.977094 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:43:10.977121 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:43:10.977135 kernel: Console: colour dummy device 80x25
Sep 12 17:43:10.977148 kernel: printk: console [tty0] enabled
Sep 12 17:43:10.977162 kernel: printk: console [ttyS0] enabled
Sep 12 17:43:10.977180 kernel: ACPI: Core revision 20230628
Sep 12 17:43:10.977194 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:43:10.977209 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:43:10.977222 kernel: x2apic enabled
Sep 12 17:43:10.977237 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:43:10.977251 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:43:10.977267 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 12 17:43:10.977281 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:43:10.977295 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:43:10.977309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:43:10.977323 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:43:10.977337 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:43:10.977351 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:43:10.977365 kernel: RETBleed: Vulnerable
Sep 12 17:43:10.977379 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:43:10.977395 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:43:10.977408 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:43:10.977422 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:43:10.977436 kernel: active return thunk: its_return_thunk
Sep 12 17:43:10.977449 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:43:10.977463 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:43:10.977487 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:43:10.977501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:43:10.977515 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:43:10.977529 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:43:10.977543 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:43:10.977560 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:43:10.977574 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:43:10.977588 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:43:10.977602 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:43:10.977616 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:43:10.977631 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:43:10.977644 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:43:10.977658 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:43:10.977672 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:43:10.977687 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:43:10.977701 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:43:10.977717 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:43:10.977732 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:43:10.977745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:43:10.977760 kernel: landlock: Up and running.
Sep 12 17:43:10.977774 kernel: SELinux: Initializing.
Sep 12 17:43:10.977788 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:43:10.977802 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:43:10.977816 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:43:10.977830 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:43:10.977844 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:43:10.977859 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:43:10.977877 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:43:10.977892 kernel: signal: max sigframe size: 3632
Sep 12 17:43:10.977905 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:43:10.977920 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:43:10.977944 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:43:10.977958 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:43:10.977993 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:43:10.978009 kernel: .... node #0, CPUs: #1
Sep 12 17:43:10.978026 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:43:10.978048 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:43:10.978065 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:43:10.978079 kernel: smpboot: Max logical packages: 1
Sep 12 17:43:10.978094 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 12 17:43:10.978111 kernel: devtmpfs: initialized
Sep 12 17:43:10.978127 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:43:10.978143 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:43:10.978160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:43:10.978180 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:43:10.978196 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:43:10.978213 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:43:10.978230 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:43:10.978245 kernel: audit: type=2000 audit(1757698991.377:1): state=initialized audit_enabled=0 res=1
Sep 12 17:43:10.978261 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:43:10.978278 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:43:10.978295 kernel: cpuidle: using governor menu
Sep 12 17:43:10.978311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:43:10.978331 kernel: dca service started, version 1.12.1
Sep 12 17:43:10.978348 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:43:10.978364 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:43:10.978381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:43:10.978398 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:43:10.978414 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:43:10.978431 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:43:10.978448 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:43:10.978464 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:43:10.978484 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:43:10.978500 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:43:10.978516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:43:10.978532 kernel: ACPI: Interpreter enabled
Sep 12 17:43:10.978549 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:43:10.978565 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:43:10.978582 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:43:10.978599 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:43:10.978615 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:43:10.978632 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:43:10.978873 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:43:10.979064 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:43:10.979207 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:43:10.979228 kernel: acpiphp: Slot [3] registered
Sep 12 17:43:10.979246 kernel: acpiphp: Slot [4] registered
Sep 12 17:43:10.979262 kernel: acpiphp: Slot [5] registered
Sep 12 17:43:10.979279 kernel: acpiphp: Slot [6] registered
Sep 12 17:43:10.979301 kernel: acpiphp: Slot [7] registered
Sep 12 17:43:10.979318 kernel: acpiphp: Slot [8] registered
Sep 12 17:43:10.979334 kernel: acpiphp: Slot [9] registered
Sep 12 17:43:10.979351 kernel: acpiphp: Slot [10] registered
Sep 12 17:43:10.979368 kernel: acpiphp: Slot [11] registered
Sep 12 17:43:10.979385 kernel: acpiphp: Slot [12] registered
Sep 12 17:43:10.979402 kernel: acpiphp: Slot [13] registered
Sep 12 17:43:10.979418 kernel: acpiphp: Slot [14] registered
Sep 12 17:43:10.979435 kernel: acpiphp: Slot [15] registered
Sep 12 17:43:10.979455 kernel: acpiphp: Slot [16] registered
Sep 12 17:43:10.979471 kernel: acpiphp: Slot [17] registered
Sep 12 17:43:10.979488 kernel: acpiphp: Slot [18] registered
Sep 12 17:43:10.979504 kernel: acpiphp: Slot [19] registered
Sep 12 17:43:10.979521 kernel: acpiphp: Slot [20] registered
Sep 12 17:43:10.979537 kernel: acpiphp: Slot [21] registered
Sep 12 17:43:10.979554 kernel: acpiphp: Slot [22] registered
Sep 12 17:43:10.979571 kernel: acpiphp: Slot [23] registered
Sep 12 17:43:10.979587 kernel: acpiphp: Slot [24] registered
Sep 12 17:43:10.979604 kernel: acpiphp: Slot [25] registered
Sep 12 17:43:10.979624 kernel: acpiphp: Slot [26] registered
Sep 12 17:43:10.979641 kernel: acpiphp: Slot [27] registered
Sep 12 17:43:10.979657 kernel: acpiphp: Slot [28] registered
Sep 12 17:43:10.979674 kernel: acpiphp: Slot [29] registered
Sep 12 17:43:10.979691 kernel: acpiphp: Slot [30] registered
Sep 12 17:43:10.979707 kernel: acpiphp: Slot [31] registered
Sep 12 17:43:10.979724 kernel: PCI host bridge to bus 0000:00
Sep 12 17:43:10.979865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:43:10.980027 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:43:10.980155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:43:10.980283 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:43:10.980408 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:43:10.980533 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:43:10.980705 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:43:10.980856 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:43:10.982145 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 12 17:43:10.982315 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:43:10.982459 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:43:10.982600 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:43:10.982739 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:43:10.982876 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:43:10.984077 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:43:10.984237 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:43:10.984389 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 12 17:43:10.984529 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 12 17:43:10.984668 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:43:10.984806 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 12 17:43:10.985983 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:43:10.986178 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:43:10.986331 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 12 17:43:10.986482 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:43:10.986624 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 12 17:43:10.986644 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:43:10.986657 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:43:10.986672 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:43:10.986686 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:43:10.986707 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:43:10.986721 kernel: iommu: Default domain type: Translated
Sep 12 17:43:10.986736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:43:10.986749 kernel: efivars: Registered efivars operations
Sep 12 17:43:10.986763 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:43:10.986778 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:43:10.986792 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:43:10.986806 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:43:10.988000 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:43:10.988169 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:43:10.988313 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:43:10.988335 kernel: vgaarb: loaded
Sep 12 17:43:10.988352 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:43:10.988368 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:43:10.988383 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:43:10.988399 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:43:10.988416 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:43:10.988436 kernel: pnp: PnP ACPI init
Sep 12 17:43:10.988452 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:43:10.988468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:43:10.988484 kernel: NET: Registered PF_INET protocol family
Sep 12 17:43:10.988500 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:43:10.988516 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:43:10.988532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:43:10.988548 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:43:10.988564 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:43:10.988584 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:43:10.988600 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:43:10.988614 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:43:10.988627 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:43:10.988640 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:43:10.988782 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:43:10.988905 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:43:10.990158 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:43:10.990292 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:43:10.990410 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:43:10.990559 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:43:10.990579 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:43:10.990596 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:43:10.990611 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:43:10.990625 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:43:10.990639 kernel: Initialise system trusted keyrings
Sep 12 17:43:10.990654 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:43:10.990673 kernel: Key type asymmetric registered
Sep 12 17:43:10.990687 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:43:10.990702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:43:10.990718 kernel: io scheduler mq-deadline registered
Sep 12 17:43:10.990733 kernel: io scheduler kyber registered
Sep 12 17:43:10.990748 kernel: io scheduler bfq registered
Sep 12 17:43:10.990763 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:43:10.990778 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:43:10.990794 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:43:10.990814 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:43:10.990830 kernel: i8042: Warning: Keylock active
Sep 12 17:43:10.990846 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:43:10.990862 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:43:10.992144 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:43:10.992276 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:43:10.992394 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:43:10 UTC (1757698990)
Sep 12 17:43:10.992510 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:43:10.992534 kernel: intel_pstate: CPU model not supported
Sep 12 17:43:10.992549 kernel: efifb: probing for efifb
Sep 12 17:43:10.992565 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 12 17:43:10.992580 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:43:10.992595 kernel: efifb: scrolling: redraw
Sep 12 17:43:10.992611 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:43:10.992626 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:43:10.992642 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:43:10.992657 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:43:10.992675 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:43:10.992691 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:43:10.992706 kernel: Segment Routing with IPv6
Sep 12 17:43:10.992722 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:43:10.992737 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:43:10.992752 kernel: Key type dns_resolver registered
Sep 12 17:43:10.992793 kernel: IPI shorthand broadcast: enabled
Sep 12 17:43:10.992816 kernel: sched_clock: Marking stable (510001969, 167560830)->(805525229, -127962430)
Sep 12 17:43:10.992834 kernel: registered taskstats version 1
Sep 12 17:43:10.992867 kernel: Loading compiled-in X.509 certificates
Sep 12 17:43:10.992890 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:43:10.992907 kernel: Key type .fscrypt registered
Sep 12 17:43:10.993946 kernel: Key type fscrypt-provisioning registered
Sep 12 17:43:10.993972 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:43:10.993991 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:43:10.994009 kernel: ima: No architecture policies found
Sep 12 17:43:10.994027 kernel: clk: Disabling unused clocks
Sep 12 17:43:10.994050 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:43:10.994067 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:43:10.994084 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:43:10.994102 kernel: Run /init as init process
Sep 12 17:43:10.994119 kernel: with arguments:
Sep 12 17:43:10.994136 kernel: /init
Sep 12 17:43:10.994153 kernel: with environment:
Sep 12 17:43:10.994169 kernel: HOME=/
Sep 12 17:43:10.994186 kernel: TERM=linux
Sep 12 17:43:10.994203 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:43:10.994227 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:43:10.994248 systemd[1]: Detected virtualization amazon.
Sep 12 17:43:10.994266 systemd[1]: Detected architecture x86-64.
Sep 12 17:43:10.994284 systemd[1]: Running in initrd.
Sep 12 17:43:10.994301 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:43:10.994318 systemd[1]: Hostname set to .
Sep 12 17:43:10.994339 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:43:10.994357 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:43:10.994375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:10.994392 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:10.994412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:43:10.994430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:43:10.994448 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:43:10.994470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:43:10.994490 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:43:10.994508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:43:10.994527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:10.994545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:10.994566 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:43:10.994585 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:43:10.994603 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:43:10.994621 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:43:10.994639 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:43:10.994656 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:43:10.994672 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:43:10.994687 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:43:10.994701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:43:10.994719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:43:10.994736 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:43:10.994753 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:43:10.994769 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:43:10.994787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:43:10.994804 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:43:10.994821 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:43:10.994839 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:43:10.994860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:43:10.994878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:10.994895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:43:10.996970 systemd-journald[178]: Collecting audit messages is disabled.
Sep 12 17:43:10.997070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:10.997089 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:43:10.997110 systemd-journald[178]: Journal started
Sep 12 17:43:10.997147 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23e0bddce97f836d4d1f0fcf004715) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:43:10.997343 systemd-modules-load[179]: Inserted module 'overlay'
Sep 12 17:43:11.009986 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:43:11.010058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:43:11.017957 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:43:11.021990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:11.027890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:43:11.042957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:43:11.045856 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 12 17:43:11.046578 kernel: Bridge firewalling registered
Sep 12 17:43:11.047230 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:11.048950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:43:11.054118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:43:11.055120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:43:11.071319 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:43:11.074700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:11.078000 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:43:11.089419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:43:11.093253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:43:11.100105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:43:11.110250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:43:11.117781 dracut-cmdline[209]: dracut-dracut-053
Sep 12 17:43:11.122037 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:43:11.161088 systemd-resolved[215]: Positive Trust Anchors:
Sep 12 17:43:11.161111 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:43:11.161170 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:43:11.169416 systemd-resolved[215]: Defaulting to hostname 'linux'.
Sep 12 17:43:11.172687 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:11.173400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:11.212972 kernel: SCSI subsystem initialized
Sep 12 17:43:11.222962 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:43:11.234956 kernel: iscsi: registered transport (tcp)
Sep 12 17:43:11.257184 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:43:11.257273 kernel: QLogic iSCSI HBA Driver
Sep 12 17:43:11.303682 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:43:11.309139 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:43:11.337141 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:43:11.337220 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:43:11.340368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:43:11.381978 kernel: raid6: avx512x4 gen() 18125 MB/s
Sep 12 17:43:11.399989 kernel: raid6: avx512x2 gen() 17408 MB/s
Sep 12 17:43:11.418446 kernel: raid6: avx512x1 gen() 16998 MB/s
Sep 12 17:43:11.435989 kernel: raid6: avx2x4 gen() 17052 MB/s
Sep 12 17:43:11.453982 kernel: raid6: avx2x2 gen() 16426 MB/s
Sep 12 17:43:11.472532 kernel: raid6: avx2x1 gen() 13523 MB/s
Sep 12 17:43:11.472606 kernel: raid6: using algorithm avx512x4 gen() 18125 MB/s
Sep 12 17:43:11.491522 kernel: raid6: .... xor() 7509 MB/s, rmw enabled
Sep 12 17:43:11.491598 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:43:11.514972 kernel: xor: automatically using best checksumming function avx
Sep 12 17:43:11.675971 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:43:11.686949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:43:11.694139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:11.707821 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Sep 12 17:43:11.712948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:11.723208 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:43:11.740661 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Sep 12 17:43:11.771829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:11.776133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:43:11.832719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:11.847073 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:43:11.865565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:11.868245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:11.870620 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:11.871857 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:43:11.878182 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:43:11.906765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:11.929322 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:43:11.944659 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:43:11.944921 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:43:11.954943 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 12 17:43:11.970409 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:43:11.970676 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:11.972245 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:11.972826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:11.973137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:11.973765 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:11.985989 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c6:cf:3f:9e:bd
Sep 12 17:43:11.988448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:11.996645 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:43:11.996690 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:43:11.992683 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:43:12.002505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:12.004069 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:12.015159 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:43:12.015412 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:43:12.024483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:12.036950 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:43:12.046255 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:43:12.046333 kernel: GPT:9289727 != 16777215
Sep 12 17:43:12.046353 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:43:12.046371 kernel: GPT:9289727 != 16777215
Sep 12 17:43:12.046388 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:43:12.046407 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:43:12.055279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:12.061176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:12.102071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:12.123247 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Sep 12 17:43:12.132958 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Sep 12 17:43:12.217650 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:43:12.224426 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:43:12.231148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:43:12.237153 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:43:12.237804 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:43:12.244145 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:43:12.254023 disk-uuid[630]: Primary Header is updated.
Sep 12 17:43:12.254023 disk-uuid[630]: Secondary Entries is updated.
Sep 12 17:43:12.254023 disk-uuid[630]: Secondary Header is updated.
Sep 12 17:43:12.260960 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:43:12.267978 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:43:12.272953 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:43:13.283204 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:43:13.283286 disk-uuid[631]: The operation has completed successfully.
Sep 12 17:43:13.434889 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:43:13.435082 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:43:13.457153 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:43:13.462704 sh[974]: Success
Sep 12 17:43:13.481947 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:43:13.596853 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:43:13.606081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:43:13.609606 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:43:13.658381 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:43:13.658462 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:43:13.658499 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:43:13.663035 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:43:13.663111 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:43:13.684044 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:43:13.701345 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:43:13.702588 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:43:13.713243 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:43:13.718158 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:43:13.748367 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:43:13.748443 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:43:13.748465 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:43:13.757092 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:43:13.768661 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:43:13.773272 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:43:13.781209 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:43:13.790193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:43:13.833538 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:43:13.844164 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:43:13.888915 systemd-networkd[1166]: lo: Link UP
Sep 12 17:43:13.888945 systemd-networkd[1166]: lo: Gained carrier
Sep 12 17:43:13.890747 systemd-networkd[1166]: Enumeration completed
Sep 12 17:43:13.891279 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:13.891284 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:43:13.891758 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:43:13.893532 systemd[1]: Reached target network.target - Network.
Sep 12 17:43:13.900903 systemd-networkd[1166]: eth0: Link UP
Sep 12 17:43:13.900911 systemd-networkd[1166]: eth0: Gained carrier
Sep 12 17:43:13.900941 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:13.917126 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.28.238/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:43:13.957166 ignition[1115]: Ignition 2.19.0
Sep 12 17:43:13.957177 ignition[1115]: Stage: fetch-offline
Sep 12 17:43:13.957366 ignition[1115]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:13.957386 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:13.957731 ignition[1115]: Ignition finished successfully
Sep 12 17:43:13.960906 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:13.966149 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:43:13.990694 ignition[1178]: Ignition 2.19.0
Sep 12 17:43:13.990710 ignition[1178]: Stage: fetch
Sep 12 17:43:13.991908 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:13.991940 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:13.992079 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.031208 ignition[1178]: PUT result: OK
Sep 12 17:43:14.035693 ignition[1178]: parsed url from cmdline: ""
Sep 12 17:43:14.035703 ignition[1178]: no config URL provided
Sep 12 17:43:14.035711 ignition[1178]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:43:14.035725 ignition[1178]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:43:14.035747 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.036796 ignition[1178]: PUT result: OK
Sep 12 17:43:14.036842 ignition[1178]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:43:14.037797 ignition[1178]: GET result: OK
Sep 12 17:43:14.037913 ignition[1178]: parsing config with SHA512: 93d02b4a30a8977ccb104fff4a6718ed6dfe885768a15493aa47fa23f2e8e1e3ebe7e7938ab6b1e3ccb8181d7168424dbb294902ac88fca071eef7709bf22eea
Sep 12 17:43:14.041958 unknown[1178]: fetched base config from "system"
Sep 12 17:43:14.041970 unknown[1178]: fetched base config from "system"
Sep 12 17:43:14.042370 ignition[1178]: fetch: fetch complete
Sep 12 17:43:14.041976 unknown[1178]: fetched user config from "aws"
Sep 12 17:43:14.042375 ignition[1178]: fetch: fetch passed
Sep 12 17:43:14.042427 ignition[1178]: Ignition finished successfully
Sep 12 17:43:14.044897 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:14.051149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:43:14.068448 ignition[1184]: Ignition 2.19.0
Sep 12 17:43:14.068462 ignition[1184]: Stage: kargs
Sep 12 17:43:14.068917 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:14.068961 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:14.069085 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.070000 ignition[1184]: PUT result: OK
Sep 12 17:43:14.072820 ignition[1184]: kargs: kargs passed
Sep 12 17:43:14.072898 ignition[1184]: Ignition finished successfully
Sep 12 17:43:14.074987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:14.079188 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:43:14.099708 ignition[1190]: Ignition 2.19.0
Sep 12 17:43:14.099723 ignition[1190]: Stage: disks
Sep 12 17:43:14.100232 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:14.100246 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:14.100369 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.101226 ignition[1190]: PUT result: OK
Sep 12 17:43:14.104178 ignition[1190]: disks: disks passed
Sep 12 17:43:14.104256 ignition[1190]: Ignition finished successfully
Sep 12 17:43:14.105746 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:43:14.106794 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:14.107356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:43:14.108077 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:43:14.108617 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:43:14.109186 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:43:14.116180 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:43:14.155623 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:43:14.158462 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:43:14.165054 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:43:14.270946 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:43:14.271164 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:43:14.272140 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:43:14.286090 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:14.290060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:43:14.291978 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:43:14.293186 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:43:14.293229 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:14.300052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:43:14.307553 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:43:14.311975 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217)
Sep 12 17:43:14.317984 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:43:14.319264 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:43:14.319285 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:43:14.323947 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:43:14.327694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:14.386317 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:43:14.391100 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:43:14.398642 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:43:14.403693 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:43:14.507655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:43:14.512039 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:43:14.515125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:14.528044 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:43:14.564328 ignition[1329]: INFO : Ignition 2.19.0
Sep 12 17:43:14.564328 ignition[1329]: INFO : Stage: mount
Sep 12 17:43:14.564328 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:14.564328 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:14.564328 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.569056 ignition[1329]: INFO : PUT result: OK
Sep 12 17:43:14.570407 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:43:14.571804 ignition[1329]: INFO : mount: mount passed
Sep 12 17:43:14.571804 ignition[1329]: INFO : Ignition finished successfully
Sep 12 17:43:14.573449 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:43:14.584104 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:43:14.653025 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:43:14.658151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:14.684998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1341)
Sep 12 17:43:14.688218 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:43:14.688288 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:43:14.689971 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:43:14.696993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:43:14.700065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:14.725120 ignition[1358]: INFO : Ignition 2.19.0
Sep 12 17:43:14.726603 ignition[1358]: INFO : Stage: files
Sep 12 17:43:14.726603 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:14.726603 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:14.726603 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:14.728269 ignition[1358]: INFO : PUT result: OK
Sep 12 17:43:14.732692 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:43:14.733659 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:43:14.733659 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:43:14.738826 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:43:14.740033 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:43:14.741107 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:43:14.740490 unknown[1358]: wrote ssh authorized keys file for user: core
Sep 12 17:43:14.744159 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:43:14.744159 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:43:14.828859 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:43:15.031362 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:43:15.033104 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:43:15.202102 systemd-networkd[1166]: eth0: Gained IPv6LL
Sep 12 17:43:15.401417 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:43:15.875549 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:43:15.875549 ignition[1358]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:15.879906 ignition[1358]: INFO : files: files passed
Sep 12 17:43:15.879906 ignition[1358]: INFO : Ignition finished successfully
Sep 12 17:43:15.882606 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:43:15.892218 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:43:15.895360 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:43:15.899389 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:43:15.899522 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:43:15.917783 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:15.917783 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:15.920833 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:15.923323 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:15.924047 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:43:15.930623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:43:15.958083 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:43:15.958218 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:43:15.959849 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:43:15.960681 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:43:15.961791 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:43:15.968181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:43:15.982008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:15.989295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:43:16.001746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:16.002533 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:16.003542 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:43:16.004431 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:43:16.004663 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:16.005961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:43:16.006849 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:43:16.007639 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:43:16.008429 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:16.009188 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:16.010106 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:43:16.010857 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:16.011654 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:43:16.012804 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:43:16.013745 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:43:16.014409 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:43:16.014593 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:16.015682 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:16.016478 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:16.017170 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:43:16.017586 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:16.018107 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:43:16.018287 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:16.019685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:43:16.019878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:16.020601 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:43:16.020782 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:43:16.028589 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:43:16.032260 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:16.033886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:43:16.034181 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:16.036388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:43:16.036608 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:16.048661 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:43:16.048801 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:43:16.053776 ignition[1411]: INFO : Ignition 2.19.0
Sep 12 17:43:16.053776 ignition[1411]: INFO : Stage: umount
Sep 12 17:43:16.053776 ignition[1411]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:16.053776 ignition[1411]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:43:16.053776 ignition[1411]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:43:16.057151 ignition[1411]: INFO : PUT result: OK
Sep 12 17:43:16.059054 ignition[1411]: INFO : umount: umount passed
Sep 12 17:43:16.059054 ignition[1411]: INFO : Ignition finished successfully
Sep 12 17:43:16.062870 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:43:16.063022 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:43:16.063863 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:43:16.063963 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:43:16.066485 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:43:16.066553 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:16.067073 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:43:16.067130 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:16.067584 systemd[1]: Stopped target network.target - Network.
Sep 12 17:43:16.069496 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:43:16.069557 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:16.070056 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:43:16.072304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:43:16.079026 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:16.079473 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:43:16.079834 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:43:16.080181 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:43:16.080239 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:43:16.080561 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:43:16.080596 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:43:16.080889 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:43:16.083005 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:43:16.083603 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:43:16.083655 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:43:16.084431 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:43:16.085232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:43:16.086918 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:43:16.088988 systemd-networkd[1166]: eth0: DHCPv6 lease lost
Sep 12 17:43:16.090905 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:43:16.091086 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:43:16.094002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:43:16.094176 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:16.097306 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:43:16.097378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:43:16.103076 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:43:16.103751 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:43:16.103863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:43:16.104464 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:43:16.104527 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:43:16.107074 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:43:16.107141 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:43:16.107559 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:43:16.107602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:43:16.108063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:16.126749 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:43:16.126918 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:16.128609 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:43:16.128672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:43:16.130204 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:43:16.130251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:43:16.131245 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:43:16.131303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:43:16.132422 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:43:16.132477 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:43:16.133947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:43:16.134004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:16.141166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:43:16.142686 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:43:16.142759 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:43:16.143241 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:43:16.143283 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:43:16.143909 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:43:16.143969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:16.144340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:16.144376 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:16.147300 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:43:16.147392 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:43:16.149386 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:43:16.149498 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:43:16.219372 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:43:16.219518 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:43:16.220736 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:43:16.221350 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:43:16.221427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:43:16.227224 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:43:16.236863 systemd[1]: Switching root.
Sep 12 17:43:16.272897 systemd-journald[178]: Journal stopped
Sep 12 17:43:17.548703 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:43:17.548804 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:43:17.548833 kernel: SELinux: policy capability open_perms=1
Sep 12 17:43:17.548858 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:43:17.548878 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:43:17.548897 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:43:17.548918 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:43:17.548960 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:43:17.548991 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:43:17.549012 kernel: audit: type=1403 audit(1757698996.487:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:43:17.549033 systemd[1]: Successfully loaded SELinux policy in 45.886ms.
Sep 12 17:43:17.549070 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.802ms.
Sep 12 17:43:17.549097 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:43:17.549117 systemd[1]: Detected virtualization amazon.
Sep 12 17:43:17.549138 systemd[1]: Detected architecture x86-64.
Sep 12 17:43:17.549157 systemd[1]: Detected first boot.
Sep 12 17:43:17.549176 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:43:17.549196 zram_generator::config[1453]: No configuration found.
Sep 12 17:43:17.549217 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:43:17.549237 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:43:17.549260 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:43:17.549280 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:43:17.549301 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:43:17.549321 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:43:17.549343 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:43:17.549362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:43:17.549384 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:43:17.549405 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:43:17.549424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:43:17.549448 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:43:17.549478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:17.549496 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:17.549514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:43:17.549532 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:43:17.549551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:43:17.549570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:43:17.549590 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:43:17.549608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:17.549632 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:43:17.549651 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:43:17.549671 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:43:17.549691 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:43:17.549711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:17.549730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:43:17.549749 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:43:17.549770 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:43:17.549793 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:43:17.549812 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:43:17.549831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:43:17.549851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:43:17.549870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:43:17.549889 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:43:17.549908 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:43:17.550010 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:43:17.550031 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:43:17.550061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:17.550081 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:43:17.550100 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:43:17.550117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:43:17.550140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:43:17.550160 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:43:17.550180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:43:17.550201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:43:17.550226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:43:17.550246 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:43:17.550266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:43:17.550285 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:43:17.550308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:43:17.550330 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:43:17.550352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:43:17.550374 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:43:17.550399 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:43:17.550420 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:43:17.550442 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:43:17.550464 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:43:17.550487 kernel: loop: module loaded
Sep 12 17:43:17.550510 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:43:17.550533 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:43:17.550558 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:43:17.550580 kernel: fuse: init (API version 7.39)
Sep 12 17:43:17.550606 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:43:17.550630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:43:17.550653 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:43:17.550676 systemd[1]: Stopped verity-setup.service.
Sep 12 17:43:17.550699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:17.550722 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:43:17.550744 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:43:17.550768 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:43:17.550791 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:43:17.550818 kernel: ACPI: bus type drm_connector registered
Sep 12 17:43:17.550840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:43:17.550863 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:43:17.550885 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:43:17.550906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:17.550957 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:43:17.550977 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:43:17.551033 systemd-journald[1538]: Collecting audit messages is disabled.
Sep 12 17:43:17.551074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:43:17.551094 systemd-journald[1538]: Journal started
Sep 12 17:43:17.551139 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec23e0bddce97f836d4d1f0fcf004715) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:43:17.171552 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:43:17.193457 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:43:17.194099 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:43:17.553067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:43:17.554992 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:43:17.557490 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:43:17.557694 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:43:17.559023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:43:17.559234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:43:17.560108 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:43:17.560296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:43:17.561576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:43:17.561770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:43:17.562916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:43:17.564253 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:43:17.565447 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:43:17.585636 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:43:17.593089 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:43:17.603097 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:43:17.605059 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:43:17.605125 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:43:17.610198 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:43:17.618195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:43:17.629162 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:43:17.630582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:43:17.636146 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:43:17.641232 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:43:17.642481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:43:17.651155 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:43:17.651863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:43:17.653562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:43:17.659877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:43:17.664051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:43:17.667778 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:43:17.669836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:43:17.671384 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:43:17.682524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:17.695206 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:43:17.698580 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec23e0bddce97f836d4d1f0fcf004715 is 106.522ms for 988 entries.
Sep 12 17:43:17.698580 systemd-journald[1538]: System Journal (/var/log/journal/ec23e0bddce97f836d4d1f0fcf004715) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:43:17.828709 systemd-journald[1538]: Received client request to flush runtime journal.
Sep 12 17:43:17.828786 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:43:17.751514 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:43:17.753379 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:43:17.762247 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:43:17.768827 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:43:17.795103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:43:17.829938 systemd-tmpfiles[1583]: ACLs are not supported, ignoring.
Sep 12 17:43:17.829962 systemd-tmpfiles[1583]: ACLs are not supported, ignoring.
Sep 12 17:43:17.830662 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:43:17.843010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:43:17.856837 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:43:17.866972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:43:17.870343 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:43:17.875253 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:43:17.905957 kernel: loop1: detected capacity change from 0 to 142488
Sep 12 17:43:17.929915 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:43:17.943522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:43:17.975956 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 17:43:17.978620 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 12 17:43:17.978651 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Sep 12 17:43:17.985795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:43:18.073993 kernel: loop3: detected capacity change from 0 to 61336
Sep 12 17:43:18.141965 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:43:18.183316 kernel: loop5: detected capacity change from 0 to 142488
Sep 12 17:43:18.247008 kernel: loop6: detected capacity change from 0 to 140768
Sep 12 17:43:18.293614 kernel: loop7: detected capacity change from 0 to 61336
Sep 12 17:43:18.318119 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:43:18.318824 (sd-merge)[1612]: Merged extensions into '/usr'.
Sep 12 17:43:18.330099 systemd[1]: Reloading requested from client PID 1582 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:43:18.330556 systemd[1]: Reloading...
Sep 12 17:43:18.472975 zram_generator::config[1641]: No configuration found.
Sep 12 17:43:18.719012 ldconfig[1577]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:43:18.722680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:43:18.810939 systemd[1]: Reloading finished in 476 ms.
Sep 12 17:43:18.841013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:43:18.844592 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:43:18.855374 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:43:18.860190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:43:18.875257 systemd[1]: Reloading requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:43:18.875274 systemd[1]: Reloading...
Sep 12 17:43:18.907206 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:43:18.907653 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:43:18.908814 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:43:18.909741 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Sep 12 17:43:18.909916 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Sep 12 17:43:18.921239 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:43:18.921409 systemd-tmpfiles[1691]: Skipping /boot
Sep 12 17:43:18.942881 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:43:18.943086 systemd-tmpfiles[1691]: Skipping /boot
Sep 12 17:43:19.009949 zram_generator::config[1721]: No configuration found.
Sep 12 17:43:19.137874 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:43:19.192910 systemd[1]: Reloading finished in 317 ms.
Sep 12 17:43:19.212339 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:43:19.217691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:43:19.230180 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:43:19.233864 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:43:19.237192 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:43:19.251170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:43:19.258325 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:19.264083 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:43:19.280462 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:43:19.286156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.286462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:43:19.297960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:43:19.302383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:43:19.316340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:43:19.317151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:43:19.317345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.324539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:43:19.325031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:43:19.329062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.330417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:43:19.330677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:43:19.330812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:43:19.332030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.343239 systemd-udevd[1780]: Using default interface naming scheme 'v255'.
Sep 12 17:43:19.348330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:43:19.349012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:43:19.352038 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:43:19.352371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:43:19.364872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.366895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:43:19.375634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:43:19.386636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:43:19.387755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:43:19.388222 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:43:19.388445 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:43:19.390172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:43:19.392951 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:43:19.396345 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:43:19.396543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:43:19.406835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:43:19.424822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:43:19.425080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:43:19.426854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:43:19.429948 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:43:19.436805 augenrules[1805]: No rules
Sep 12 17:43:19.446174 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:43:19.447007 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:43:19.447993 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:43:19.467414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:19.479168 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:43:19.492958 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:43:19.494036 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:43:19.495098 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:43:19.594398 systemd-resolved[1776]: Positive Trust Anchors:
Sep 12 17:43:19.594843 systemd-resolved[1776]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:43:19.594912 systemd-resolved[1776]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:43:19.604227 systemd-resolved[1776]: Defaulting to hostname 'linux'.
Sep 12 17:43:19.607747 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:19.608752 systemd-networkd[1820]: lo: Link UP
Sep 12 17:43:19.608759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:19.608762 systemd-networkd[1820]: lo: Gained carrier
Sep 12 17:43:19.611047 systemd-networkd[1820]: Enumeration completed
Sep 12 17:43:19.611177 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:43:19.613495 systemd[1]: Reached target network.target - Network.
Sep 12 17:43:19.623570 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:43:19.651892 (udev-worker)[1827]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:43:19.662414 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:43:19.742276 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:19.742291 systemd-networkd[1820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:43:19.749776 systemd-networkd[1820]: eth0: Link UP
Sep 12 17:43:19.751121 systemd-networkd[1820]: eth0: Gained carrier
Sep 12 17:43:19.751160 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:19.763050 systemd-networkd[1820]: eth0: DHCPv4 address 172.31.28.238/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:43:19.768257 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1831)
Sep 12 17:43:19.768379 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 12 17:43:19.778995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 17:43:19.815013 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Sep 12 17:43:19.818962 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:43:19.821951 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 12 17:43:19.841266 kernel: ACPI: button: Sleep Button [SLPF]
Sep 12 17:43:19.940196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:19.976671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:19.977855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:19.987992 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:43:19.993092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:43:20.004237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:43:20.007386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:20.015428 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:43:20.026156 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:43:20.032512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:43:20.045101 lvm[1938]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:43:20.078256 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:43:20.079192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:20.084410 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:43:20.092383 lvm[1944]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:43:20.122475 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:43:20.133390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:20.134316 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:43:20.134913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:43:20.135412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:43:20.136047 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:43:20.136551 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:43:20.136992 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:43:20.137407 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:43:20.137451 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:43:20.137879 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:43:20.139562 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:43:20.141629 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:43:20.153229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:43:20.154655 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:43:20.155306 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:43:20.155777 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:43:20.156253 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:43:20.156293 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:43:20.157722 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:43:20.162158 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:43:20.167754 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:43:20.172436 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:43:20.178117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:43:20.178787 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:43:20.189174 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:43:20.207057 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:43:20.222259 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:43:20.230749 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:43:20.241250 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:43:20.248150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:43:20.259160 dbus-daemon[1953]: [system] SELinux support is enabled
Sep 12 17:43:20.262174 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:43:20.262785 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1820 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:43:20.263278 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:43:20.265200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:43:20.269890 jq[1954]: false
Sep 12 17:43:20.276174 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:43:20.280044 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:43:20.282800 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:43:20.298432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:43:20.300065 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:43:20.312419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:43:20.312469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:43:20.314133 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:43:20.314173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:43:20.317723 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:43:20.334159 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:43:20.345463 extend-filesystems[1955]: Found loop4 Sep 12 17:43:20.345463 extend-filesystems[1955]: Found loop5 Sep 12 17:43:20.345463 extend-filesystems[1955]: Found loop6 Sep 12 17:43:20.345463 extend-filesystems[1955]: Found loop7 Sep 12 17:43:20.345463 extend-filesystems[1955]: Found nvme0n1 Sep 12 17:43:20.383185 tar[1970]: linux-amd64/helm Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p1 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p2 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p3 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found usr Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p4 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p6 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p7 Sep 12 17:43:20.383474 extend-filesystems[1955]: Found nvme0n1p9 Sep 12 17:43:20.383474 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.371 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.386 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.396 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.396 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.399 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.399 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.404 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.405 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.407 INFO Fetch failed with 404: resource not found Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.407 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.409 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.409 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.410 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.414 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.415 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.416 INFO Fetch successful Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.416 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:43:20.439599 coreos-metadata[1952]: Sep 12 17:43:20.422 INFO Fetch successful Sep 12 17:43:20.348132 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: ---------------------------------------------------- Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: corporation. 
Support and training for ntp-4 are Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: available at https://www.nwtime.org/support Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: ---------------------------------------------------- Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: proto: precision = 0.097 usec (-23) Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: basedate set to 2025-08-31 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: gps base set to 2025-08-31 (week 2382) Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listen normally on 3 eth0 172.31.28.238:123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listen normally on 4 lo [::1]:123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: bind(21) AF_INET6 fe80::4c6:cfff:fe3f:9ebd%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: unable to create socket on eth0 (5) for fe80::4c6:cfff:fe3f:9ebd%2#123 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: failed to init interface for address fe80::4c6:cfff:fe3f:9ebd%2 Sep 12 17:43:20.452978 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Sep 12 17:43:20.466902 jq[1968]: true Sep 12 17:43:20.415061 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:43:20.348425 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:43:20.415095 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:43:20.414333 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:43:20.415106 ntpd[1957]: ---------------------------------------------------- Sep 12 17:43:20.415116 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:43:20.415126 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:43:20.415135 ntpd[1957]: corporation. Support and training for ntp-4 are Sep 12 17:43:20.415146 ntpd[1957]: available at https://www.nwtime.org/support Sep 12 17:43:20.415156 ntpd[1957]: ---------------------------------------------------- Sep 12 17:43:20.427840 ntpd[1957]: proto: precision = 0.097 usec (-23) Sep 12 17:43:20.434042 ntpd[1957]: basedate set to 2025-08-31 Sep 12 17:43:20.434065 ntpd[1957]: gps base set to 2025-08-31 (week 2382) Sep 12 17:43:20.440412 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:43:20.440479 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:43:20.440677 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:43:20.440715 ntpd[1957]: Listen normally on 3 eth0 172.31.28.238:123 Sep 12 17:43:20.474546 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:20.474546 ntpd[1957]: 12 Sep 17:43:20 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:20.471706 systemd[1]: motdgen.service: Deactivated successfully. 
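The coreos-metadata fetches above follow the IMDSv2 pattern: PUT to /latest/api/token for a session token, then GET each metadata path with the token attached (the 404 on the ipv6 path is simply an attribute this instance does not have). A standalone sketch of the same flow using only the standard library; the TTL value is illustrative:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=21600):
        # IMDSv2: a session token must be obtained before any metadata GET.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    def imds_get(path, token):
        req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    tok = imds_token()
    print(imds_get("2021-01-03/meta-data/instance-id", tok))

A missing attribute (like the ipv6 path above) surfaces as a urllib.error.HTTPError with code 404 rather than a connection failure.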
Sep 12 17:43:20.440757 ntpd[1957]: Listen normally on 4 lo [::1]:123 Sep 12 17:43:20.473151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:43:20.440802 ntpd[1957]: bind(21) AF_INET6 fe80::4c6:cfff:fe3f:9ebd%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:43:20.440825 ntpd[1957]: unable to create socket on eth0 (5) for fe80::4c6:cfff:fe3f:9ebd%2#123 Sep 12 17:43:20.440841 ntpd[1957]: failed to init interface for address fe80::4c6:cfff:fe3f:9ebd%2 Sep 12 17:43:20.440873 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Sep 12 17:43:20.472893 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:20.472965 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:43:20.486889 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9 Sep 12 17:43:20.496851 update_engine[1967]: I20250912 17:43:20.481758 1967 main.cc:92] Flatcar Update Engine starting Sep 12 17:43:20.517994 extend-filesystems[2003]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:43:20.519192 jq[1991]: true Sep 12 17:43:20.548378 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:43:20.528490 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:43:20.548554 update_engine[1967]: I20250912 17:43:20.529340 1967 update_check_scheduler.cc:74] Next update check in 11m0s Sep 12 17:43:20.547179 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:43:20.606069 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:43:20.615666 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:43:20.622980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:43:20.655154 systemd-logind[1965]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:43:20.655183 systemd-logind[1965]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:43:20.655207 systemd-logind[1965]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:43:20.659711 systemd-logind[1965]: New seat seat0. Sep 12 17:43:20.663090 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:43:20.731298 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:43:20.748087 extend-filesystems[2003]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:43:20.748087 extend-filesystems[2003]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:43:20.748087 extend-filesystems[2003]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:43:20.750635 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:43:20.751823 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:43:20.753281 bash[2033]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:43:20.754391 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:43:20.756680 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:43:20.780075 systemd[1]: Starting sshkeys.service... Sep 12 17:43:20.817952 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1821) Sep 12 17:43:20.894493 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
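The extend-filesystems/resize2fs entries above are an online grow: the EXT4 kernel lines confirm /dev/nvme0n1p9 was resized from 553472 to 1489915 blocks while mounted at /. The equivalent operation, sketched with subprocess (this assumes the underlying partition has already been grown, e.g. by growpart):

    import subprocess

    def grow_ext4(device):
        # ext4 can be grown while mounted; with no size argument,
        # resize2fs expands the filesystem to fill its partition.
        subprocess.run(["resize2fs", device], check=True)

    grow_ext4("/dev/nvme0n1p9")  # device name taken from the log above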
Sep 12 17:43:20.903515 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:43:20.938496 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:43:20.939755 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:43:20.943900 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1976 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:43:20.955356 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:43:21.027108 polkitd[2088]: Started polkitd version 121 Sep 12 17:43:21.051507 polkitd[2088]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:43:21.051595 polkitd[2088]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:43:21.059150 polkitd[2088]: Finished loading, compiling and executing 2 rules Sep 12 17:43:21.059773 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:43:21.060047 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:43:21.062971 polkitd[2088]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:43:21.084063 containerd[1979]: time="2025-09-12T17:43:21.082329851Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:43:21.100112 systemd-hostnamed[1976]: Hostname set to (transient) Sep 12 17:43:21.100598 systemd-resolved[1776]: System hostname changed to 'ip-172-31-28-238'. Sep 12 17:43:21.104458 locksmithd[2008]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:43:21.181408 coreos-metadata[2069]: Sep 12 17:43:21.181 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:43:21.184729 coreos-metadata[2069]: Sep 12 17:43:21.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:43:21.185145 coreos-metadata[2069]: Sep 12 17:43:21.185 INFO Fetch successful Sep 12 17:43:21.185145 coreos-metadata[2069]: Sep 12 17:43:21.185 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:43:21.185950 coreos-metadata[2069]: Sep 12 17:43:21.185 INFO Fetch successful Sep 12 17:43:21.193600 unknown[2069]: wrote ssh authorized keys file for user: core Sep 12 17:43:21.242253 update-ssh-keys[2135]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:43:21.243497 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:43:21.246594 systemd[1]: Finished sshkeys.service. Sep 12 17:43:21.246908 containerd[1979]: time="2025-09-12T17:43:21.246867425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.247004 sshd_keygen[1994]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260090858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260164199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260205890Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260434386Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260460063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260550526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260584570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260844678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260882054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261007 containerd[1979]: time="2025-09-12T17:43:21.260904031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261438 containerd[1979]: time="2025-09-12T17:43:21.260922604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.261438 containerd[1979]: time="2025-09-12T17:43:21.261165637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.264477 containerd[1979]: time="2025-09-12T17:43:21.264001508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:43:21.264477 containerd[1979]: time="2025-09-12T17:43:21.264276730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:43:21.264477 containerd[1979]: time="2025-09-12T17:43:21.264317940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:43:21.264647 containerd[1979]: time="2025-09-12T17:43:21.264481155Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:43:21.264647 containerd[1979]: time="2025-09-12T17:43:21.264556762Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270548405Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270631438Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270655174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270676996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270696831Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.270866597Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.271358476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273146270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273191214Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273215940Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273236939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273271330Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273290692Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.273951 containerd[1979]: time="2025-09-12T17:43:21.273312950Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273343227Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273379027Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273403870Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273423437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273453906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273500792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273522218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273543459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273562321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273582785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273600986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273621061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273643124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.274531 containerd[1979]: time="2025-09-12T17:43:21.273665197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.273685764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274471967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274510775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274544178Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274580504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274600357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274618514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274787705Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.274816571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.275127233Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.275158912Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.275174385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.275193989Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:43:21.277096 containerd[1979]: time="2025-09-12T17:43:21.275209172Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:43:21.277680 containerd[1979]: time="2025-09-12T17:43:21.275225876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:43:21.277726 containerd[1979]: time="2025-09-12T17:43:21.275634406Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:43:21.277726 containerd[1979]: time="2025-09-12T17:43:21.275718320Z" level=info msg="Connect containerd service" Sep 12 17:43:21.277726 containerd[1979]: time="2025-09-12T17:43:21.276407541Z" level=info msg="using legacy CRI server" Sep 12 17:43:21.277726 containerd[1979]: time="2025-09-12T17:43:21.276425657Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:43:21.277726 containerd[1979]: 
time="2025-09-12T17:43:21.276558083Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.281847338Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.281999026Z" level=info msg="Start subscribing containerd event" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.282054930Z" level=info msg="Start recovering state" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.282133791Z" level=info msg="Start event monitor" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.282154502Z" level=info msg="Start snapshots syncer" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.282171283Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:43:21.282233 containerd[1979]: time="2025-09-12T17:43:21.282182390Z" level=info msg="Start streaming server" Sep 12 17:43:21.285450 containerd[1979]: time="2025-09-12T17:43:21.283219198Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:43:21.285450 containerd[1979]: time="2025-09-12T17:43:21.283281283Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:43:21.285450 containerd[1979]: time="2025-09-12T17:43:21.283350526Z" level=info msg="containerd successfully booted in 0.207056s" Sep 12 17:43:21.283445 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:43:21.336076 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:43:21.346148 systemd-networkd[1820]: eth0: Gained IPv6LL Sep 12 17:43:21.349384 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:43:21.351834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:43:21.354714 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:43:21.366010 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:43:21.371343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:21.379321 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:43:21.390319 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:43:21.390897 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:43:21.402569 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:43:21.447702 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:43:21.462180 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:43:21.476776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:43:21.477752 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:43:21.494455 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:43:21.496385 amazon-ssm-agent[2168]: Initializing new seelog logger Sep 12 17:43:21.497969 amazon-ssm-agent[2168]: New Seelog Logger Creation Complete Sep 12 17:43:21.497969 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:43:21.497969 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.498423 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 processing appconfig overrides Sep 12 17:43:21.499116 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.501308 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.501308 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 processing appconfig overrides Sep 12 17:43:21.501308 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.501308 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.501308 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 processing appconfig overrides Sep 12 17:43:21.502861 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO Proxy environment variables: Sep 12 17:43:21.507256 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.507256 amazon-ssm-agent[2168]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:43:21.507390 amazon-ssm-agent[2168]: 2025/09/12 17:43:21 processing appconfig overrides Sep 12 17:43:21.604239 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO https_proxy: Sep 12 17:43:21.704230 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO http_proxy: Sep 12 17:43:21.801940 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO no_proxy: Sep 12 17:43:21.802221 tar[1970]: linux-amd64/LICENSE Sep 12 17:43:21.802666 tar[1970]: linux-amd64/README.md Sep 12 17:43:21.827591 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:43:21.899597 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO Agent will take identity from EC2 Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [Registrar] Starting registrar module Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [EC2Identity] EC2 registration was successful. 
Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:43:21.937361 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:43:21.937953 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:43:21.998623 amazon-ssm-agent[2168]: 2025-09-12 17:43:21 INFO [CredentialRefresher] Next credential rotation will be in 30.63332744835 minutes Sep 12 17:43:22.952472 amazon-ssm-agent[2168]: 2025-09-12 17:43:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:43:23.053072 amazon-ssm-agent[2168]: 2025-09-12 17:43:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started Sep 12 17:43:23.153990 amazon-ssm-agent[2168]: 2025-09-12 17:43:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:43:23.415571 ntpd[1957]: Listen normally on 6 eth0 [fe80::4c6:cfff:fe3f:9ebd%2]:123 Sep 12 17:43:23.415975 ntpd[1957]: 12 Sep 17:43:23 ntpd[1957]: Listen normally on 6 eth0 [fe80::4c6:cfff:fe3f:9ebd%2]:123 Sep 12 17:43:23.702745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:23.706679 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:43:23.707777 systemd[1]: Startup finished in 652ms (kernel) + 5.769s (initrd) + 7.264s (userspace) = 13.685s. Sep 12 17:43:23.713171 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:43:24.829320 kubelet[2213]: E0912 17:43:24.829249 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:43:24.831756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:43:24.832008 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:43:24.832497 systemd[1]: kubelet.service: Consumed 1.131s CPU time. Sep 12 17:43:24.994523 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:43:25.000375 systemd[1]: Started sshd@0-172.31.28.238:22-147.75.109.163:34976.service - OpenSSH per-connection server daemon (147.75.109.163:34976). Sep 12 17:43:25.173272 sshd[2226]: Accepted publickey for core from 147.75.109.163 port 34976 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:25.175728 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:25.186310 systemd-logind[1965]: New session 1 of user core. Sep 12 17:43:25.187842 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:43:25.194235 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:43:25.208594 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:43:25.215315 systemd[1]: Starting user@500.service - User Manager for UID 500... 
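The kubelet failure above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and the unit keeps restarting until the file exists. Purely to illustrate the file the error message refers to, a sketch writing a minimal KubeletConfiguration stub; the field values are illustrative, though cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config above:

    import pathlib, textwrap

    # Normally generated by kubeadm, not written by hand.
    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    """)
    p = pathlib.Path("/var/lib/kubelet/config.yaml")
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(config)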
Sep 12 17:43:25.220793 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:43:25.337294 systemd[2230]: Queued start job for default target default.target. Sep 12 17:43:25.349124 systemd[2230]: Created slice app.slice - User Application Slice. Sep 12 17:43:25.349156 systemd[2230]: Reached target paths.target - Paths. Sep 12 17:43:25.349170 systemd[2230]: Reached target timers.target - Timers. Sep 12 17:43:25.350743 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:43:25.363029 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:43:25.363808 systemd[2230]: Reached target sockets.target - Sockets. Sep 12 17:43:25.363898 systemd[2230]: Reached target basic.target - Basic System. Sep 12 17:43:25.363967 systemd[2230]: Reached target default.target - Main User Target. Sep 12 17:43:25.363997 systemd[2230]: Startup finished in 136ms. Sep 12 17:43:25.364272 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:43:25.369174 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:43:25.517244 systemd[1]: Started sshd@1-172.31.28.238:22-147.75.109.163:34992.service - OpenSSH per-connection server daemon (147.75.109.163:34992). Sep 12 17:43:25.673168 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 34992 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:25.674888 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:25.680502 systemd-logind[1965]: New session 2 of user core. Sep 12 17:43:25.690216 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:43:25.807469 sshd[2241]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:25.810291 systemd[1]: sshd@1-172.31.28.238:22-147.75.109.163:34992.service: Deactivated successfully. Sep 12 17:43:25.811860 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:43:25.813259 systemd-logind[1965]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:43:25.814450 systemd-logind[1965]: Removed session 2. Sep 12 17:43:25.851580 systemd[1]: Started sshd@2-172.31.28.238:22-147.75.109.163:34994.service - OpenSSH per-connection server daemon (147.75.109.163:34994). Sep 12 17:43:26.008649 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 34994 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:26.010347 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:26.015830 systemd-logind[1965]: New session 3 of user core. Sep 12 17:43:26.035172 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:43:26.150219 sshd[2248]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:26.153575 systemd[1]: sshd@2-172.31.28.238:22-147.75.109.163:34994.service: Deactivated successfully. Sep 12 17:43:26.155216 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:43:26.156341 systemd-logind[1965]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:43:26.157687 systemd-logind[1965]: Removed session 3. Sep 12 17:43:26.182350 systemd[1]: Started sshd@3-172.31.28.238:22-147.75.109.163:34998.service - OpenSSH per-connection server daemon (147.75.109.163:34998). 
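The sshd@0-172.31.28.238:22-... unit above is socket activation at work: sshd.socket (listening since early boot) accepts each TCP connection and spawns a per-connection sshd@.service instance. Any client connection produces its own unit, which even a trivial banner read demonstrates; the address below is this host's own, taken from the log:

    import socket

    # Each connection to port 22 shows up as a new sshd@... unit.
    with socket.create_connection(("172.31.28.238", 22), timeout=5) as s:
        print(s.recv(64).decode(errors="replace"))  # e.g. "SSH-2.0-OpenSSH_..."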
Sep 12 17:43:26.337406 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 34998 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:26.338922 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:26.343864 systemd-logind[1965]: New session 4 of user core. Sep 12 17:43:26.354171 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:43:26.470689 sshd[2255]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:26.474748 systemd[1]: sshd@3-172.31.28.238:22-147.75.109.163:34998.service: Deactivated successfully. Sep 12 17:43:26.476698 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:43:26.478532 systemd-logind[1965]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:43:26.479560 systemd-logind[1965]: Removed session 4. Sep 12 17:43:26.502225 systemd[1]: Started sshd@4-172.31.28.238:22-147.75.109.163:35006.service - OpenSSH per-connection server daemon (147.75.109.163:35006). Sep 12 17:43:26.663236 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 35006 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:26.664614 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:26.668985 systemd-logind[1965]: New session 5 of user core. Sep 12 17:43:26.676192 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:43:26.792715 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:43:26.793308 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:26.807809 sudo[2265]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:26.831032 sshd[2262]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:26.835013 systemd[1]: sshd@4-172.31.28.238:22-147.75.109.163:35006.service: Deactivated successfully. Sep 12 17:43:26.836732 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:43:26.837880 systemd-logind[1965]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:43:26.839297 systemd-logind[1965]: Removed session 5. Sep 12 17:43:26.864195 systemd[1]: Started sshd@5-172.31.28.238:22-147.75.109.163:35018.service - OpenSSH per-connection server daemon (147.75.109.163:35018). Sep 12 17:43:27.037207 sshd[2270]: Accepted publickey for core from 147.75.109.163 port 35018 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:27.038750 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:27.043607 systemd-logind[1965]: New session 6 of user core. Sep 12 17:43:27.050260 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:43:27.147692 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:43:27.148042 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:27.151962 sudo[2274]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:27.157638 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:43:27.157953 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:27.171283 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
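The sudo entries above remove the shipped audit rule files and restart audit-rules.service; the entries that follow show auditctl and augenrules both reporting "No rules" afterwards. augenrules assembles every *.rules file under /etc/audit/rules.d into the running configuration, so re-adding a rule is a file drop plus a reload. A hedged sketch using a standard auditd watch rule (the filename and key are illustrative):

    import pathlib, subprocess

    # Watch /etc/passwd for writes/attribute changes, tagged "identity".
    rule = "-w /etc/passwd -p wa -k identity\n"
    pathlib.Path("/etc/audit/rules.d/50-identity.rules").write_text(rule)
    subprocess.run(["augenrules", "--load"], check=True)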
Sep 12 17:43:27.175741 auditctl[2277]: No rules Sep 12 17:43:27.176194 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:43:27.176432 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:43:27.179371 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:43:27.209622 augenrules[2295]: No rules Sep 12 17:43:27.210444 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:43:27.211547 sudo[2273]: pam_unix(sudo:session): session closed for user root Sep 12 17:43:27.234135 sshd[2270]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:27.237080 systemd[1]: sshd@5-172.31.28.238:22-147.75.109.163:35018.service: Deactivated successfully. Sep 12 17:43:27.239134 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:43:27.240347 systemd-logind[1965]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:43:27.241744 systemd-logind[1965]: Removed session 6. Sep 12 17:43:27.272299 systemd[1]: Started sshd@6-172.31.28.238:22-147.75.109.163:35034.service - OpenSSH per-connection server daemon (147.75.109.163:35034). Sep 12 17:43:29.013817 systemd-resolved[1776]: Clock change detected. Flushing caches. Sep 12 17:43:29.024797 sshd[2303]: Accepted publickey for core from 147.75.109.163 port 35034 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:43:29.026309 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:29.032214 systemd-logind[1965]: New session 7 of user core. Sep 12 17:43:29.042317 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:43:29.137178 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:43:29.137466 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:43:29.497412 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:43:29.499048 (dockerd)[2322]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:43:29.879359 dockerd[2322]: time="2025-09-12T17:43:29.878513276Z" level=info msg="Starting up" Sep 12 17:43:30.070572 dockerd[2322]: time="2025-09-12T17:43:30.070165069Z" level=info msg="Loading containers: start." Sep 12 17:43:30.193236 kernel: Initializing XFRM netlink socket Sep 12 17:43:30.232786 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:43:30.292410 systemd-networkd[1820]: docker0: Link UP Sep 12 17:43:30.306682 dockerd[2322]: time="2025-09-12T17:43:30.306635500Z" level=info msg="Loading containers: done." 
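The dockerd entries here and just below trace the daemon coming up: the XFRM netlink socket, the docker0 bridge appearing via systemd-networkd, then "API listen on /run/docker.sock". Once it reports completed initialization, the API is reachable over that socket; a sketch using the third-party docker SDK (pip install docker), which defaults to /run/docker.sock:

    import docker  # third-party SDK, not part of the stdlib

    client = docker.from_env()           # talks to /run/docker.sock
    print(client.ping())                 # True once the daemon answers
    print(client.version()["Version"])   # e.g. "26.1.0", as logged below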
Sep 12 17:43:30.330536 dockerd[2322]: time="2025-09-12T17:43:30.330415986Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:43:30.330536 dockerd[2322]: time="2025-09-12T17:43:30.330558166Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:43:30.331009 dockerd[2322]: time="2025-09-12T17:43:30.330662183Z" level=info msg="Daemon has completed initialization" Sep 12 17:43:30.365748 dockerd[2322]: time="2025-09-12T17:43:30.365666013Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:43:30.366060 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:43:31.434286 containerd[1979]: time="2025-09-12T17:43:31.434239047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:43:32.020931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421614533.mount: Deactivated successfully. Sep 12 17:43:33.388816 containerd[1979]: time="2025-09-12T17:43:33.388745549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.390155 containerd[1979]: time="2025-09-12T17:43:33.389843867Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:43:33.391116 containerd[1979]: time="2025-09-12T17:43:33.391081352Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.394750 containerd[1979]: time="2025-09-12T17:43:33.394338844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:33.395514 containerd[1979]: time="2025-09-12T17:43:33.395452590Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.961175197s" Sep 12 17:43:33.395514 containerd[1979]: time="2025-09-12T17:43:33.395497961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:43:33.396775 containerd[1979]: time="2025-09-12T17:43:33.396739596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:43:34.860381 containerd[1979]: time="2025-09-12T17:43:34.860331214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.861399 containerd[1979]: time="2025-09-12T17:43:34.861356525Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:43:34.863954 containerd[1979]: time="2025-09-12T17:43:34.862446550Z" level=info msg="ImageCreate event 
name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.865454 containerd[1979]: time="2025-09-12T17:43:34.865417603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:34.866696 containerd[1979]: time="2025-09-12T17:43:34.866658737Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.469880492s" Sep 12 17:43:34.866831 containerd[1979]: time="2025-09-12T17:43:34.866811557Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:43:34.867421 containerd[1979]: time="2025-09-12T17:43:34.867392887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:43:36.089004 containerd[1979]: time="2025-09-12T17:43:36.088953296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.095373 containerd[1979]: time="2025-09-12T17:43:36.094975326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:43:36.103987 containerd[1979]: time="2025-09-12T17:43:36.103379198Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.114498 containerd[1979]: time="2025-09-12T17:43:36.114414004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:36.116060 containerd[1979]: time="2025-09-12T17:43:36.115994147Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.248566439s" Sep 12 17:43:36.116268 containerd[1979]: time="2025-09-12T17:43:36.116242819Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:43:36.117945 containerd[1979]: time="2025-09-12T17:43:36.117910836Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:43:36.682383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:43:36.689569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:36.997274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:43:37.000585 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:43:37.080287 kubelet[2539]: E0912 17:43:37.080237 2539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:43:37.084981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:43:37.085228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:43:37.305578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483406746.mount: Deactivated successfully. Sep 12 17:43:37.830888 containerd[1979]: time="2025-09-12T17:43:37.830803965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.832116 containerd[1979]: time="2025-09-12T17:43:37.831941536Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:43:37.833958 containerd[1979]: time="2025-09-12T17:43:37.833002094Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.835842 containerd[1979]: time="2025-09-12T17:43:37.834981901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:37.835842 containerd[1979]: time="2025-09-12T17:43:37.835349926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.717406023s" Sep 12 17:43:37.835842 containerd[1979]: time="2025-09-12T17:43:37.835377140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:43:37.836223 containerd[1979]: time="2025-09-12T17:43:37.836205481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:43:38.309836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184970875.mount: Deactivated successfully. 
Sep 12 17:43:39.233500 containerd[1979]: time="2025-09-12T17:43:39.233431565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.234752 containerd[1979]: time="2025-09-12T17:43:39.234496507Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:43:39.236239 containerd[1979]: time="2025-09-12T17:43:39.236191484Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.239986 containerd[1979]: time="2025-09-12T17:43:39.239410855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.240660 containerd[1979]: time="2025-09-12T17:43:39.240631847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.404353373s" Sep 12 17:43:39.240720 containerd[1979]: time="2025-09-12T17:43:39.240665475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:43:39.241539 containerd[1979]: time="2025-09-12T17:43:39.241508825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:43:39.688974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517557981.mount: Deactivated successfully. 
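Each completed pull above logs its size and wall time ("... in 1.404353373s"), so registry throughput can be read straight out of the journal. A small sketch; the regex assumes the escaped-quote form these containerd entries use:

    import re

    # Matches: msg="Pulled image \"registry.k8s.io/pause:3.10\" ... in 459.642317ms"
    PULL_RE = re.compile(
        r'Pulled image \\?"(?P<image>[^"\\]+)\\?".* in (?P<dur>[\d.]+m?s)')

    def pull_times(lines):
        for line in lines:
            m = PULL_RE.search(line)
            if m:
                yield m.group("image"), m.group("dur")

    demo = r'msg="Pulled image \"registry.k8s.io/pause:3.10\" in 459.642317ms"'
    print(list(pull_times([demo])))
    # [('registry.k8s.io/pause:3.10', '459.642317ms')]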
Sep 12 17:43:39.695521 containerd[1979]: time="2025-09-12T17:43:39.695467666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.696591 containerd[1979]: time="2025-09-12T17:43:39.696521208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:43:39.698984 containerd[1979]: time="2025-09-12T17:43:39.697696885Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.701331 containerd[1979]: time="2025-09-12T17:43:39.700468950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:39.701331 containerd[1979]: time="2025-09-12T17:43:39.701187860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.642317ms" Sep 12 17:43:39.701331 containerd[1979]: time="2025-09-12T17:43:39.701225333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:43:39.702183 containerd[1979]: time="2025-09-12T17:43:39.702155271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:43:40.293680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3729512947.mount: Deactivated successfully. Sep 12 17:43:42.367235 containerd[1979]: time="2025-09-12T17:43:42.367174713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:42.373540 containerd[1979]: time="2025-09-12T17:43:42.373454655Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:43:42.381855 containerd[1979]: time="2025-09-12T17:43:42.381774327Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:42.389145 containerd[1979]: time="2025-09-12T17:43:42.389063171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:43:42.390258 containerd[1979]: time="2025-09-12T17:43:42.389986766Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.687798018s" Sep 12 17:43:42.390258 containerd[1979]: time="2025-09-12T17:43:42.390054622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:43:45.035971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
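Taking the four pull summaries together (kube-proxy, coredns, pause, etcd), the "bytes read" counts and wall-clock durations give the effective transfer rates. Note that "bytes read" is the compressed data containerd fetched, not the unpacked image size. Worked out:

```python
# Effective pull throughput for the four pulls above, using the
# "bytes read" counts and durations containerd logged.
pulls = {
    "kube-proxy:v1.31.13": (30_410_252, 1.717406023),
    "coredns:v1.11.3":     (18_565_241, 1.404353373),
    "pause:3.10":          (321_138,    0.459642317),
    "etcd:3.5.15-0":       (56_910_709, 2.687798018),
}
for image, (nbytes, secs) in pulls.items():
    mib_s = nbytes / secs / (1024 * 1024)
    print(f"{image:22s} {mib_s:6.2f} MiB/s")
# ~16.9, ~12.6, ~0.67 and ~20.2 MiB/s respectively: the tiny pause image is
# dominated by registry round-trip latency, while the large etcd pull gets
# the best sustained rate.
```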
Sep 12 17:43:45.049345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:45.076078 systemd[1]: Reloading requested from client PID 2687 ('systemctl') (unit session-7.scope)... Sep 12 17:43:45.076100 systemd[1]: Reloading... Sep 12 17:43:45.172105 zram_generator::config[2724]: No configuration found. Sep 12 17:43:45.357876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:43:45.446592 systemd[1]: Reloading finished in 370 ms. Sep 12 17:43:45.494357 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:43:45.494453 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:43:45.494773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:45.500356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:45.724133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:45.729840 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:43:45.790440 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:45.790440 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:43:45.790440 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
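The three deprecation warnings above recur on every kubelet start (they appear again at 17:43:53 below). Where each flag is headed, as far as these messages indicate; the config-file field names reflect my reading of kubelet.config.k8s.io/v1beta1 and should be verified against the running kubelet version:

```python
# Hedged sketch: the expected destination of each deprecated flag in the log.
# Field names are assumptions from KubeletConfiguration (v1beta1) and
# containerd's CRI plugin config, not taken from this host.
FLAG_MIGRATION = {
    "--container-runtime-endpoint": "KubeletConfiguration.containerRuntimeEndpoint",
    "--volume-plugin-dir":          "KubeletConfiguration.volumePluginDir",
    # Per the log message itself, this one is not replaced by a kubelet config
    # field: the image GC will take the sandbox image from the CRI runtime
    # (containerd's `sandbox_image` setting) instead.
    "--pod-infra-container-image":  "containerd CRI config: sandbox_image",
}

for flag, target in FLAG_MIGRATION.items():
    print(f"{flag:32s} -> {target}")
```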
Sep 12 17:43:45.793040 kubelet[2791]: I0912 17:43:45.792965 2791 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:43:46.283421 kubelet[2791]: I0912 17:43:46.283363 2791 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:43:46.283421 kubelet[2791]: I0912 17:43:46.283400 2791 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:43:46.283884 kubelet[2791]: I0912 17:43:46.283847 2791 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:43:46.321249 kubelet[2791]: I0912 17:43:46.321191 2791 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:43:46.324069 kubelet[2791]: E0912 17:43:46.323904 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:46.337924 kubelet[2791]: E0912 17:43:46.337887 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:43:46.337924 kubelet[2791]: I0912 17:43:46.337918 2791 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:43:46.349827 kubelet[2791]: I0912 17:43:46.349071 2791 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:43:46.349827 kubelet[2791]: I0912 17:43:46.349208 2791 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:43:46.349827 kubelet[2791]: I0912 17:43:46.349348 2791 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:43:46.349827 kubelet[2791]: I0912 17:43:46.349371 2791 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-238","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:43:46.350127 kubelet[2791]: I0912 17:43:46.349601 2791 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:43:46.350127 kubelet[2791]: I0912 17:43:46.349610 2791 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:43:46.350127 kubelet[2791]: I0912 17:43:46.349712 2791 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:46.354422 kubelet[2791]: I0912 17:43:46.354365 2791 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:43:46.354422 kubelet[2791]: I0912 17:43:46.354417 2791 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:43:46.355207 kubelet[2791]: I0912 17:43:46.354458 2791 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:43:46.355207 kubelet[2791]: I0912 17:43:46.354479 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:43:46.358001 kubelet[2791]: W0912 17:43:46.356526 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-238&limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:46.358001 kubelet[2791]: E0912 17:43:46.356623 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.28.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-238&limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:46.358001 kubelet[2791]: W0912 17:43:46.357815 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:46.358001 kubelet[2791]: E0912 17:43:46.357877 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:46.358664 kubelet[2791]: I0912 17:43:46.358645 2791 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:43:46.363496 kubelet[2791]: I0912 17:43:46.363466 2791 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:43:46.363846 kubelet[2791]: W0912 17:43:46.363831 2791 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:43:46.365810 kubelet[2791]: I0912 17:43:46.365782 2791 server.go:1274] "Started kubelet" Sep 12 17:43:46.369051 kubelet[2791]: I0912 17:43:46.367770 2791 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:43:46.369051 kubelet[2791]: I0912 17:43:46.368787 2791 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:43:46.372690 kubelet[2791]: I0912 17:43:46.371868 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:43:46.372690 kubelet[2791]: I0912 17:43:46.372184 2791 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:43:46.374343 kubelet[2791]: I0912 17:43:46.374325 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:43:46.374634 kubelet[2791]: E0912 17:43:46.372372 2791 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.238:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.238:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-238.186499f3a894ba89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-238,UID:ip-172-31-28-238,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-238,},FirstTimestamp:2025-09-12 17:43:46.365758089 +0000 UTC m=+0.632049033,LastTimestamp:2025-09-12 17:43:46.365758089 +0000 UTC m=+0.632049033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-238,}" Sep 12 17:43:46.374824 kubelet[2791]: I0912 17:43:46.374805 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:43:46.384112 kubelet[2791]: E0912 17:43:46.384074 2791 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-28-238\" not found" Sep 12 17:43:46.384250 kubelet[2791]: I0912 17:43:46.384140 2791 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:43:46.384299 kubelet[2791]: I0912 17:43:46.384264 2791 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:43:46.384346 kubelet[2791]: I0912 17:43:46.384318 2791 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:43:46.385099 kubelet[2791]: W0912 17:43:46.385010 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:46.385203 kubelet[2791]: E0912 17:43:46.385114 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:46.385258 kubelet[2791]: E0912 17:43:46.385192 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-238?timeout=10s\": dial tcp 172.31.28.238:6443: connect: connection refused" interval="200ms" Sep 12 17:43:46.385424 kubelet[2791]: I0912 17:43:46.385402 2791 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:43:46.385516 kubelet[2791]: I0912 17:43:46.385496 2791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:43:46.387864 kubelet[2791]: I0912 17:43:46.387842 2791 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:43:46.395243 kubelet[2791]: E0912 17:43:46.395210 2791 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:43:46.400390 kubelet[2791]: I0912 17:43:46.399760 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:43:46.401933 kubelet[2791]: I0912 17:43:46.401902 2791 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:43:46.402524 kubelet[2791]: I0912 17:43:46.402095 2791 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:43:46.402524 kubelet[2791]: I0912 17:43:46.402123 2791 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:43:46.402524 kubelet[2791]: E0912 17:43:46.402180 2791 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:43:46.414114 kubelet[2791]: W0912 17:43:46.413999 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:46.414278 kubelet[2791]: E0912 17:43:46.414126 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:46.419742 kubelet[2791]: I0912 17:43:46.419723 2791 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:43:46.420368 kubelet[2791]: I0912 17:43:46.419869 2791 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:43:46.420368 kubelet[2791]: I0912 17:43:46.419885 2791 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:46.425620 kubelet[2791]: I0912 17:43:46.425589 2791 policy_none.go:49] "None policy: Start" Sep 12 17:43:46.426352 kubelet[2791]: I0912 17:43:46.426256 2791 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:43:46.426352 kubelet[2791]: I0912 17:43:46.426348 2791 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:43:46.437281 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:43:46.449215 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:43:46.452361 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:43:46.464615 kubelet[2791]: I0912 17:43:46.464337 2791 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:43:46.464615 kubelet[2791]: I0912 17:43:46.464574 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:43:46.465089 kubelet[2791]: I0912 17:43:46.465046 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:43:46.467975 kubelet[2791]: I0912 17:43:46.467954 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:43:46.469333 kubelet[2791]: E0912 17:43:46.468574 2791 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-238\" not found" Sep 12 17:43:46.514706 systemd[1]: Created slice kubepods-burstable-podb88eb03d76e02ae917cfbf2d417e5f52.slice - libcontainer container kubepods-burstable-podb88eb03d76e02ae917cfbf2d417e5f52.slice. Sep 12 17:43:46.534784 systemd[1]: Created slice kubepods-burstable-pod4926fa18dece13fb3aad38c9c42ea56b.slice - libcontainer container kubepods-burstable-pod4926fa18dece13fb3aad38c9c42ea56b.slice. 
Sep 12 17:43:46.540741 systemd[1]: Created slice kubepods-burstable-podfea583ce9d8b7e72b3a2e1ed5d546c72.slice - libcontainer container kubepods-burstable-podfea583ce9d8b7e72b3a2e1ed5d546c72.slice. Sep 12 17:43:46.572274 kubelet[2791]: I0912 17:43:46.572211 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:46.572684 kubelet[2791]: E0912 17:43:46.572587 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.238:6443/api/v1/nodes\": dial tcp 172.31.28.238:6443: connect: connection refused" node="ip-172-31-28-238" Sep 12 17:43:46.586095 kubelet[2791]: I0912 17:43:46.586056 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:46.586389 kubelet[2791]: I0912 17:43:46.586271 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:46.586389 kubelet[2791]: I0912 17:43:46.586297 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:46.586389 kubelet[2791]: I0912 17:43:46.586317 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:46.586389 kubelet[2791]: E0912 17:43:46.586290 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-238?timeout=10s\": dial tcp 172.31.28.238:6443: connect: connection refused" interval="400ms" Sep 12 17:43:46.586389 kubelet[2791]: I0912 17:43:46.586352 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fea583ce9d8b7e72b3a2e1ed5d546c72-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-238\" (UID: \"fea583ce9d8b7e72b3a2e1ed5d546c72\") " pod="kube-system/kube-scheduler-ip-172-31-28-238" Sep 12 17:43:46.586616 kubelet[2791]: I0912 17:43:46.586385 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-ca-certs\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:46.586616 kubelet[2791]: I0912 17:43:46.586405 2791 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:46.586616 kubelet[2791]: I0912 17:43:46.586425 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:46.586616 kubelet[2791]: I0912 17:43:46.586583 2791 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:46.775479 kubelet[2791]: I0912 17:43:46.775445 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:46.775974 kubelet[2791]: E0912 17:43:46.775946 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.238:6443/api/v1/nodes\": dial tcp 172.31.28.238:6443: connect: connection refused" node="ip-172-31-28-238" Sep 12 17:43:46.833964 containerd[1979]: time="2025-09-12T17:43:46.833858323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-238,Uid:b88eb03d76e02ae917cfbf2d417e5f52,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:46.844627 containerd[1979]: time="2025-09-12T17:43:46.844556755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-238,Uid:4926fa18dece13fb3aad38c9c42ea56b,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:46.845157 containerd[1979]: time="2025-09-12T17:43:46.845120797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-238,Uid:fea583ce9d8b7e72b3a2e1ed5d546c72,Namespace:kube-system,Attempt:0,}" Sep 12 17:43:46.987110 kubelet[2791]: E0912 17:43:46.987058 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-238?timeout=10s\": dial tcp 172.31.28.238:6443: connect: connection refused" interval="800ms" Sep 12 17:43:47.178352 kubelet[2791]: I0912 17:43:47.178249 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:47.178576 kubelet[2791]: E0912 17:43:47.178547 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.238:6443/api/v1/nodes\": dial tcp 172.31.28.238:6443: connect: connection refused" node="ip-172-31-28-238" Sep 12 17:43:47.305193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995110213.mount: Deactivated successfully. 
Sep 12 17:43:47.321255 containerd[1979]: time="2025-09-12T17:43:47.321181550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:47.323296 containerd[1979]: time="2025-09-12T17:43:47.323251034Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:47.325089 containerd[1979]: time="2025-09-12T17:43:47.325033717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:43:47.327276 containerd[1979]: time="2025-09-12T17:43:47.327237217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:43:47.329242 containerd[1979]: time="2025-09-12T17:43:47.329169996Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:47.332693 containerd[1979]: time="2025-09-12T17:43:47.331756006Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:47.335466 containerd[1979]: time="2025-09-12T17:43:47.335417536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:43:47.337674 containerd[1979]: time="2025-09-12T17:43:47.337625756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:43:47.338597 containerd[1979]: time="2025-09-12T17:43:47.338399929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.461878ms" Sep 12 17:43:47.342045 containerd[1979]: time="2025-09-12T17:43:47.340536461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.900642ms" Sep 12 17:43:47.342726 containerd[1979]: time="2025-09-12T17:43:47.342690888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.521123ms" Sep 12 17:43:47.541141 kubelet[2791]: W0912 17:43:47.540968 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:47.541141 
kubelet[2791]: E0912 17:43:47.541061 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:47.568122 containerd[1979]: time="2025-09-12T17:43:47.567283661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:47.568122 containerd[1979]: time="2025-09-12T17:43:47.567343656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:47.568122 containerd[1979]: time="2025-09-12T17:43:47.567363592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.568122 containerd[1979]: time="2025-09-12T17:43:47.567448112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.569841 containerd[1979]: time="2025-09-12T17:43:47.569183401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:47.569841 containerd[1979]: time="2025-09-12T17:43:47.569618490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:47.569841 containerd[1979]: time="2025-09-12T17:43:47.569634723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.569841 containerd[1979]: time="2025-09-12T17:43:47.569714268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.576272 containerd[1979]: time="2025-09-12T17:43:47.575779319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:43:47.576585 containerd[1979]: time="2025-09-12T17:43:47.576467958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:43:47.577823 containerd[1979]: time="2025-09-12T17:43:47.577621770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.577823 containerd[1979]: time="2025-09-12T17:43:47.577718570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:43:47.594197 systemd[1]: Started cri-containerd-9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5.scope - libcontainer container 9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5. Sep 12 17:43:47.610225 systemd[1]: Started cri-containerd-2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1.scope - libcontainer container 2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1. Sep 12 17:43:47.614830 systemd[1]: Started cri-containerd-7624652f64146d3f0e833017598f4e94586b6abe8ca13da6e2b37edeed162233.scope - libcontainer container 7624652f64146d3f0e833017598f4e94586b6abe8ca13da6e2b37edeed162233. 
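Each sandbox above lands in a transient systemd scope whose name is simply the container ID with a runtime prefix. Reconstructing that naming from the "Started cri-containerd-….scope" entries (the prefix is as seen in this log; other runtimes and cgroup drivers use their own):

```python
# Minimal reconstruction of the scope naming visible in the journal above.
def scope_unit(container_id: str, prefix: str = "cri-containerd-") -> str:
    return f"{prefix}{container_id}.scope"

sandbox = "9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5"
print(scope_unit(sandbox))
# -> cri-containerd-9837bb8d58...d9fde5.scope, matching the systemd entry above
```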
Sep 12 17:43:47.675527 containerd[1979]: time="2025-09-12T17:43:47.675495335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-238,Uid:4926fa18dece13fb3aad38c9c42ea56b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5\"" Sep 12 17:43:47.680856 containerd[1979]: time="2025-09-12T17:43:47.680462697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-238,Uid:fea583ce9d8b7e72b3a2e1ed5d546c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1\"" Sep 12 17:43:47.686848 kubelet[2791]: W0912 17:43:47.686797 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:47.687089 kubelet[2791]: E0912 17:43:47.687070 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:47.689366 containerd[1979]: time="2025-09-12T17:43:47.689326316Z" level=info msg="CreateContainer within sandbox \"9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:43:47.689480 containerd[1979]: time="2025-09-12T17:43:47.689345069Z" level=info msg="CreateContainer within sandbox \"2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:43:47.692076 kubelet[2791]: W0912 17:43:47.692005 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-238&limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: connection refused Sep 12 17:43:47.692232 kubelet[2791]: E0912 17:43:47.692213 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-238&limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:47.699005 containerd[1979]: time="2025-09-12T17:43:47.698971641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-238,Uid:b88eb03d76e02ae917cfbf2d417e5f52,Namespace:kube-system,Attempt:0,} returns sandbox id \"7624652f64146d3f0e833017598f4e94586b6abe8ca13da6e2b37edeed162233\"" Sep 12 17:43:47.702260 containerd[1979]: time="2025-09-12T17:43:47.702154646Z" level=info msg="CreateContainer within sandbox \"7624652f64146d3f0e833017598f4e94586b6abe8ca13da6e2b37edeed162233\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:43:47.719896 kubelet[2791]: W0912 17:43:47.719771 2791 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.238:6443: connect: 
connection refused Sep 12 17:43:47.719896 kubelet[2791]: E0912 17:43:47.719855 2791 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:47.732271 containerd[1979]: time="2025-09-12T17:43:47.732230350Z" level=info msg="CreateContainer within sandbox \"2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f\"" Sep 12 17:43:47.734046 containerd[1979]: time="2025-09-12T17:43:47.733058611Z" level=info msg="StartContainer for \"facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f\"" Sep 12 17:43:47.741597 containerd[1979]: time="2025-09-12T17:43:47.741560256Z" level=info msg="CreateContainer within sandbox \"9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a\"" Sep 12 17:43:47.743104 containerd[1979]: time="2025-09-12T17:43:47.743075559Z" level=info msg="StartContainer for \"d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a\"" Sep 12 17:43:47.752815 containerd[1979]: time="2025-09-12T17:43:47.752773564Z" level=info msg="CreateContainer within sandbox \"7624652f64146d3f0e833017598f4e94586b6abe8ca13da6e2b37edeed162233\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95b6b8f6a7f96dab0de034252054ff1e222d30f851570e33dd17ee534821cc9d\"" Sep 12 17:43:47.753448 containerd[1979]: time="2025-09-12T17:43:47.753421813Z" level=info msg="StartContainer for \"95b6b8f6a7f96dab0de034252054ff1e222d30f851570e33dd17ee534821cc9d\"" Sep 12 17:43:47.769208 systemd[1]: Started cri-containerd-facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f.scope - libcontainer container facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f. Sep 12 17:43:47.783223 systemd[1]: Started cri-containerd-d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a.scope - libcontainer container d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a. Sep 12 17:43:47.788365 kubelet[2791]: E0912 17:43:47.788055 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-238?timeout=10s\": dial tcp 172.31.28.238:6443: connect: connection refused" interval="1.6s" Sep 12 17:43:47.818308 systemd[1]: Started cri-containerd-95b6b8f6a7f96dab0de034252054ff1e222d30f851570e33dd17ee534821cc9d.scope - libcontainer container 95b6b8f6a7f96dab0de034252054ff1e222d30f851570e33dd17ee534821cc9d. 
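For all three control-plane pods the journal records the same CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer runs within that sandbox and returns a container id, and StartContainer starts it (the "returns successfully" confirmations follow just below). A schematic of the flow; `FakeCRI` is a stand-in for a CRI RuntimeService client, not a real library:

```python
import uuid

class FakeCRI:
    """Stand-in whose method names mirror the RPCs visible in the log:
    RunPodSandbox, CreateContainer, StartContainer."""
    def run_pod_sandbox(self, pod_meta: dict) -> str:
        return uuid.uuid4().hex
    def create_container(self, sandbox_id: str, container_meta: dict) -> str:
        return uuid.uuid4().hex
    def start_container(self, container_id: str) -> None:
        pass

def launch_static_pod(cri, pod_meta: dict, container_meta: dict):
    sandbox_id = cri.run_pod_sandbox(pod_meta)                       # "RunPodSandbox ... returns sandbox id"
    container_id = cri.create_container(sandbox_id, container_meta)  # "CreateContainer within sandbox"
    cri.start_container(container_id)                                # "StartContainer ... returns successfully"
    return sandbox_id, container_id

print(launch_static_pod(FakeCRI(),
                        {"name": "kube-apiserver-ip-172-31-28-238"},
                        {"name": "kube-apiserver"}))
```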
Sep 12 17:43:47.879763 containerd[1979]: time="2025-09-12T17:43:47.879206254Z" level=info msg="StartContainer for \"facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f\" returns successfully" Sep 12 17:43:47.906280 containerd[1979]: time="2025-09-12T17:43:47.906226953Z" level=info msg="StartContainer for \"d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a\" returns successfully" Sep 12 17:43:47.914800 containerd[1979]: time="2025-09-12T17:43:47.914744469Z" level=info msg="StartContainer for \"95b6b8f6a7f96dab0de034252054ff1e222d30f851570e33dd17ee534821cc9d\" returns successfully" Sep 12 17:43:47.983292 kubelet[2791]: I0912 17:43:47.983260 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:47.984037 kubelet[2791]: E0912 17:43:47.983991 2791 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.238:6443/api/v1/nodes\": dial tcp 172.31.28.238:6443: connect: connection refused" node="ip-172-31-28-238" Sep 12 17:43:48.480128 kubelet[2791]: E0912 17:43:48.480004 2791 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:43:49.587969 kubelet[2791]: I0912 17:43:49.587937 2791 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:50.740453 kubelet[2791]: E0912 17:43:50.740393 2791 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-238\" not found" node="ip-172-31-28-238" Sep 12 17:43:50.785350 kubelet[2791]: E0912 17:43:50.785170 2791 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-238.186499f3a894ba89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-238,UID:ip-172-31-28-238,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-238,},FirstTimestamp:2025-09-12 17:43:46.365758089 +0000 UTC m=+0.632049033,LastTimestamp:2025-09-12 17:43:46.365758089 +0000 UTC m=+0.632049033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-238,}" Sep 12 17:43:50.855043 kubelet[2791]: E0912 17:43:50.853151 2791 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-238.186499f3aa55e3e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-238,UID:ip-172-31-28-238,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-28-238,},FirstTimestamp:2025-09-12 17:43:46.395194343 +0000 UTC m=+0.661485305,LastTimestamp:2025-09-12 17:43:46.395194343 +0000 UTC m=+0.661485305,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-238,}" Sep 12 17:43:50.863180 kubelet[2791]: I0912 
17:43:50.863136 2791 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-238" Sep 12 17:43:50.863331 kubelet[2791]: E0912 17:43:50.863208 2791 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-238\": node \"ip-172-31-28-238\" not found" Sep 12 17:43:51.359721 kubelet[2791]: I0912 17:43:51.359674 2791 apiserver.go:52] "Watching apiserver" Sep 12 17:43:51.385286 kubelet[2791]: I0912 17:43:51.385251 2791 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:43:52.720247 systemd[1]: Reloading requested from client PID 3069 ('systemctl') (unit session-7.scope)... Sep 12 17:43:52.720267 systemd[1]: Reloading... Sep 12 17:43:52.861053 zram_generator::config[3110]: No configuration found. Sep 12 17:43:53.001218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:43:53.105251 systemd[1]: Reloading finished in 384 ms. Sep 12 17:43:53.135480 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:43:53.147944 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:53.148369 kubelet[2791]: I0912 17:43:53.148213 2791 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:43:53.160528 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:43:53.160831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:53.160905 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 126.9M memory peak, 0B memory swap peak. Sep 12 17:43:53.166548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:43:53.408746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:43:53.425573 (kubelet)[3173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:43:53.496907 kubelet[3173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:43:53.496907 kubelet[3173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:43:53.496907 kubelet[3173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
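The rejected events above are still informative: each carries both a wall-clock FirstTimestamp and a Go monotonic offset (m=+0.632049033, seconds since the kubelet process started). Subtracting one from the other dates the process start, which lines up with the "Started kubelet.service" entry at 17:43:45.724 earlier in the log:

```python
from datetime import datetime, timedelta, timezone

# FirstTimestamp and monotonic offset copied from the event in the log;
# nanoseconds are truncated to microseconds, datetime's resolution.
first_ts = datetime(2025, 9, 12, 17, 43, 46, 365758, tzinfo=timezone.utc)
monotonic_offset = timedelta(seconds=0.632049033)

print(first_ts - monotonic_offset)  # 2025-09-12 17:43:45.733709+00:00
```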
Sep 12 17:43:53.496907 kubelet[3173]: I0912 17:43:53.496710 3173 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:43:53.508493 kubelet[3173]: I0912 17:43:53.508451 3173 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:43:53.508493 kubelet[3173]: I0912 17:43:53.508481 3173 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:43:53.508805 kubelet[3173]: I0912 17:43:53.508783 3173 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:43:53.510183 kubelet[3173]: I0912 17:43:53.510149 3173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:43:53.512236 kubelet[3173]: I0912 17:43:53.512067 3173 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:43:53.515306 kubelet[3173]: E0912 17:43:53.515262 3173 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:43:53.515306 kubelet[3173]: I0912 17:43:53.515304 3173 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:43:53.521048 kubelet[3173]: I0912 17:43:53.519151 3173 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:43:53.521048 kubelet[3173]: I0912 17:43:53.520470 3173 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:43:53.521048 kubelet[3173]: I0912 17:43:53.520639 3173 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:43:53.521268 kubelet[3173]: I0912 17:43:53.520679 3173 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-28-238","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:43:53.521268 kubelet[3173]: I0912 17:43:53.521097 3173 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:43:53.521268 kubelet[3173]: I0912 17:43:53.521111 3173 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:43:53.521268 kubelet[3173]: I0912 17:43:53.521146 3173 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:53.521268 kubelet[3173]: I0912 17:43:53.521267 3173 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:43:53.521539 kubelet[3173]: I0912 17:43:53.521284 3173 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:43:53.521539 kubelet[3173]: I0912 17:43:53.521324 3173 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:43:53.521539 kubelet[3173]: I0912 17:43:53.521340 3173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:43:53.524820 kubelet[3173]: I0912 17:43:53.524793 3173 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:43:53.525351 kubelet[3173]: I0912 17:43:53.525330 3173 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:43:53.525906 kubelet[3173]: I0912 17:43:53.525887 3173 server.go:1274] "Started kubelet" Sep 12 17:43:53.531445 kubelet[3173]: I0912 17:43:53.531409 3173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:43:53.543048 kubelet[3173]: I0912 17:43:53.541451 3173 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:43:53.543048 kubelet[3173]: I0912 17:43:53.541603 3173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:43:53.543235 kubelet[3173]: I0912 17:43:53.543185 3173 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:43:53.546930 kubelet[3173]: I0912 17:43:53.545926 3173 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:43:53.546930 kubelet[3173]: I0912 17:43:53.546157 3173 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:43:53.551044 kubelet[3173]: I0912 17:43:53.550646 3173 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:43:53.551044 kubelet[3173]: I0912 17:43:53.550758 3173 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:43:53.551044 kubelet[3173]: I0912 17:43:53.550889 3173 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:43:53.553043 kubelet[3173]: E0912 17:43:53.551388 3173 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-238\" not found" Sep 12 17:43:53.558526 kubelet[3173]: I0912 17:43:53.558477 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:43:53.559958 kubelet[3173]: I0912 17:43:53.559931 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:43:53.560092 kubelet[3173]: I0912 17:43:53.559970 3173 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:43:53.560092 kubelet[3173]: I0912 17:43:53.559994 3173 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:43:53.560092 kubelet[3173]: E0912 17:43:53.560053 3173 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:43:53.564237 kubelet[3173]: I0912 17:43:53.564212 3173 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:43:53.564521 kubelet[3173]: I0912 17:43:53.564497 3173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:43:53.572538 kubelet[3173]: E0912 17:43:53.570527 3173 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:43:53.574131 kubelet[3173]: I0912 17:43:53.572970 3173 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:43:53.635758 kubelet[3173]: I0912 17:43:53.635722 3173 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:43:53.635758 kubelet[3173]: I0912 17:43:53.635743 3173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:43:53.635758 kubelet[3173]: I0912 17:43:53.635765 3173 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:43:53.635988 kubelet[3173]: I0912 17:43:53.635951 3173 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:43:53.635988 kubelet[3173]: I0912 17:43:53.635966 3173 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:43:53.636100 kubelet[3173]: I0912 17:43:53.635991 3173 policy_none.go:49] "None policy: Start" Sep 12 17:43:53.637052 kubelet[3173]: I0912 17:43:53.636715 3173 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:43:53.637052 kubelet[3173]: I0912 17:43:53.636741 3173 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:43:53.637052 kubelet[3173]: I0912 17:43:53.636879 3173 state_mem.go:75] "Updated machine memory state" Sep 12 17:43:53.642449 kubelet[3173]: I0912 17:43:53.641587 3173 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:43:53.642449 kubelet[3173]: I0912 17:43:53.641770 3173 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:43:53.642449 kubelet[3173]: I0912 17:43:53.641783 3173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:43:53.642449 kubelet[3173]: I0912 17:43:53.642007 3173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:43:53.674103 kubelet[3173]: E0912 17:43:53.673899 3173 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-238\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:53.744322 kubelet[3173]: I0912 17:43:53.744286 3173 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-238" Sep 12 17:43:53.757859 kubelet[3173]: I0912 17:43:53.757801 3173 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-28-238" Sep 12 17:43:53.758065 kubelet[3173]: I0912 17:43:53.757879 3173 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-238" Sep 12 17:43:53.853244 kubelet[3173]: I0912 17:43:53.853198 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:53.853244 kubelet[3173]: I0912 17:43:53.853244 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:53.853244 kubelet[3173]: I0912 17:43:53.853268 3173 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-ca-certs\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:53.854222 kubelet[3173]: I0912 17:43:53.853287 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:53.854222 kubelet[3173]: I0912 17:43:53.853315 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:53.854222 kubelet[3173]: I0912 17:43:53.853330 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4926fa18dece13fb3aad38c9c42ea56b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-238\" (UID: \"4926fa18dece13fb3aad38c9c42ea56b\") " pod="kube-system/kube-controller-manager-ip-172-31-28-238" Sep 12 17:43:53.854222 kubelet[3173]: I0912 17:43:53.853348 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fea583ce9d8b7e72b3a2e1ed5d546c72-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-238\" (UID: \"fea583ce9d8b7e72b3a2e1ed5d546c72\") " pod="kube-system/kube-scheduler-ip-172-31-28-238" Sep 12 17:43:53.854222 kubelet[3173]: I0912 17:43:53.853363 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:53.854367 kubelet[3173]: I0912 17:43:53.853378 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b88eb03d76e02ae917cfbf2d417e5f52-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-238\" (UID: \"b88eb03d76e02ae917cfbf2d417e5f52\") " pod="kube-system/kube-apiserver-ip-172-31-28-238" Sep 12 17:43:54.524549 kubelet[3173]: I0912 17:43:54.524503 3173 apiserver.go:52] "Watching apiserver" Sep 12 17:43:54.553062 kubelet[3173]: I0912 17:43:54.552061 3173 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:43:54.654344 kubelet[3173]: I0912 17:43:54.654268 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-238" podStartSLOduration=1.654246299 podStartE2EDuration="1.654246299s" podCreationTimestamp="2025-09-12 17:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:54.635234927 +0000 UTC m=+1.197874136" 
watchObservedRunningTime="2025-09-12 17:43:54.654246299 +0000 UTC m=+1.216885505" Sep 12 17:43:54.676374 kubelet[3173]: I0912 17:43:54.676296 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-238" podStartSLOduration=1.676278331 podStartE2EDuration="1.676278331s" podCreationTimestamp="2025-09-12 17:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:54.654661355 +0000 UTC m=+1.217300558" watchObservedRunningTime="2025-09-12 17:43:54.676278331 +0000 UTC m=+1.238917528" Sep 12 17:43:54.693929 kubelet[3173]: I0912 17:43:54.693870 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-238" podStartSLOduration=2.693849528 podStartE2EDuration="2.693849528s" podCreationTimestamp="2025-09-12 17:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:43:54.677062949 +0000 UTC m=+1.239702257" watchObservedRunningTime="2025-09-12 17:43:54.693849528 +0000 UTC m=+1.256488725" Sep 12 17:43:59.331014 kubelet[3173]: I0912 17:43:59.330982 3173 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:43:59.331771 containerd[1979]: time="2025-09-12T17:43:59.331732800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:43:59.332362 kubelet[3173]: I0912 17:43:59.331959 3173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:44:00.198042 systemd[1]: Created slice kubepods-besteffort-podd79ee9a3_0f9c_4648_abba_0b4502fca56d.slice - libcontainer container kubepods-besteffort-podd79ee9a3_0f9c_4648_abba_0b4502fca56d.slice. Sep 12 17:44:00.293873 kubelet[3173]: I0912 17:44:00.293834 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d79ee9a3-0f9c-4648-abba-0b4502fca56d-var-lib-calico\") pod \"tigera-operator-58fc44c59b-xqr5s\" (UID: \"d79ee9a3-0f9c-4648-abba-0b4502fca56d\") " pod="tigera-operator/tigera-operator-58fc44c59b-xqr5s" Sep 12 17:44:00.293873 kubelet[3173]: I0912 17:44:00.293872 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56ql\" (UniqueName: \"kubernetes.io/projected/d79ee9a3-0f9c-4648-abba-0b4502fca56d-kube-api-access-d56ql\") pod \"tigera-operator-58fc44c59b-xqr5s\" (UID: \"d79ee9a3-0f9c-4648-abba-0b4502fca56d\") " pod="tigera-operator/tigera-operator-58fc44c59b-xqr5s" Sep 12 17:44:00.342588 systemd[1]: Created slice kubepods-besteffort-pod2da27dc2_474f_44b9_96d6_36dd88efb3dd.slice - libcontainer container kubepods-besteffort-pod2da27dc2_474f_44b9_96d6_36dd88efb3dd.slice. 
Sep 12 17:44:00.395165 kubelet[3173]: I0912 17:44:00.395114 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5sl\" (UniqueName: \"kubernetes.io/projected/2da27dc2-474f-44b9-96d6-36dd88efb3dd-kube-api-access-dg5sl\") pod \"kube-proxy-z4cb9\" (UID: \"2da27dc2-474f-44b9-96d6-36dd88efb3dd\") " pod="kube-system/kube-proxy-z4cb9"
Sep 12 17:44:00.395165 kubelet[3173]: I0912 17:44:00.395160 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2da27dc2-474f-44b9-96d6-36dd88efb3dd-xtables-lock\") pod \"kube-proxy-z4cb9\" (UID: \"2da27dc2-474f-44b9-96d6-36dd88efb3dd\") " pod="kube-system/kube-proxy-z4cb9"
Sep 12 17:44:00.395165 kubelet[3173]: I0912 17:44:00.395177 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2da27dc2-474f-44b9-96d6-36dd88efb3dd-lib-modules\") pod \"kube-proxy-z4cb9\" (UID: \"2da27dc2-474f-44b9-96d6-36dd88efb3dd\") " pod="kube-system/kube-proxy-z4cb9"
Sep 12 17:44:00.395786 kubelet[3173]: I0912 17:44:00.395206 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2da27dc2-474f-44b9-96d6-36dd88efb3dd-kube-proxy\") pod \"kube-proxy-z4cb9\" (UID: \"2da27dc2-474f-44b9-96d6-36dd88efb3dd\") " pod="kube-system/kube-proxy-z4cb9"
Sep 12 17:44:00.512442 containerd[1979]: time="2025-09-12T17:44:00.510738426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xqr5s,Uid:d79ee9a3-0f9c-4648-abba-0b4502fca56d,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:44:00.556690 containerd[1979]: time="2025-09-12T17:44:00.556129175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:44:00.556690 containerd[1979]: time="2025-09-12T17:44:00.556205145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:44:00.556690 containerd[1979]: time="2025-09-12T17:44:00.556227173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:44:00.556690 containerd[1979]: time="2025-09-12T17:44:00.556361574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:44:00.585251 systemd[1]: Started cri-containerd-b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b.scope - libcontainer container b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b.
Sep 12 17:44:00.637011 containerd[1979]: time="2025-09-12T17:44:00.636953938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xqr5s,Uid:d79ee9a3-0f9c-4648-abba-0b4502fca56d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b\""
Sep 12 17:44:00.641730 containerd[1979]: time="2025-09-12T17:44:00.641594777Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:44:00.647272 containerd[1979]: time="2025-09-12T17:44:00.647232506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4cb9,Uid:2da27dc2-474f-44b9-96d6-36dd88efb3dd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:44:00.682425 containerd[1979]: time="2025-09-12T17:44:00.679819208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:44:00.682425 containerd[1979]: time="2025-09-12T17:44:00.679998242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:44:00.682425 containerd[1979]: time="2025-09-12T17:44:00.680119803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:44:00.682425 containerd[1979]: time="2025-09-12T17:44:00.680319163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:44:00.703273 systemd[1]: Started cri-containerd-84c652659bfc4691f212199c9d3cacd648f680406cf045f4035a2b03e5450fc7.scope - libcontainer container 84c652659bfc4691f212199c9d3cacd648f680406cf045f4035a2b03e5450fc7.
Sep 12 17:44:00.735181 containerd[1979]: time="2025-09-12T17:44:00.735094179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4cb9,Uid:2da27dc2-474f-44b9-96d6-36dd88efb3dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"84c652659bfc4691f212199c9d3cacd648f680406cf045f4035a2b03e5450fc7\""
Sep 12 17:44:00.740440 containerd[1979]: time="2025-09-12T17:44:00.740394653Z" level=info msg="CreateContainer within sandbox \"84c652659bfc4691f212199c9d3cacd648f680406cf045f4035a2b03e5450fc7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:44:00.768419 containerd[1979]: time="2025-09-12T17:44:00.768189848Z" level=info msg="CreateContainer within sandbox \"84c652659bfc4691f212199c9d3cacd648f680406cf045f4035a2b03e5450fc7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d103b70a831e61740eeb0f24778aa2993751bd49e5e7e5913f686ddf88eaa06d\""
Sep 12 17:44:00.769177 containerd[1979]: time="2025-09-12T17:44:00.769070319Z" level=info msg="StartContainer for \"d103b70a831e61740eeb0f24778aa2993751bd49e5e7e5913f686ddf88eaa06d\""
Sep 12 17:44:00.803281 systemd[1]: Started cri-containerd-d103b70a831e61740eeb0f24778aa2993751bd49e5e7e5913f686ddf88eaa06d.scope - libcontainer container d103b70a831e61740eeb0f24778aa2993751bd49e5e7e5913f686ddf88eaa06d.
Sep 12 17:44:00.835507 containerd[1979]: time="2025-09-12T17:44:00.835466206Z" level=info msg="StartContainer for \"d103b70a831e61740eeb0f24778aa2993751bd49e5e7e5913f686ddf88eaa06d\" returns successfully"
Sep 12 17:44:02.661393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180229550.mount: Deactivated successfully.
Sep 12 17:44:04.592855 containerd[1979]: time="2025-09-12T17:44:04.592788886Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:04.595217 containerd[1979]: time="2025-09-12T17:44:04.595151873Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:44:04.598420 containerd[1979]: time="2025-09-12T17:44:04.598371786Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:04.622539 containerd[1979]: time="2025-09-12T17:44:04.622483402Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:04.625398 containerd[1979]: time="2025-09-12T17:44:04.624419238Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.982743704s"
Sep 12 17:44:04.625398 containerd[1979]: time="2025-09-12T17:44:04.624499081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:44:04.672978 kubelet[3173]: I0912 17:44:04.670710 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4cb9" podStartSLOduration=4.6706833660000004 podStartE2EDuration="4.670683366s" podCreationTimestamp="2025-09-12 17:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:44:02.021165857 +0000 UTC m=+8.583805066" watchObservedRunningTime="2025-09-12 17:44:04.670683366 +0000 UTC m=+11.233322575"
Sep 12 17:44:04.686538 containerd[1979]: time="2025-09-12T17:44:04.686497390Z" level=info msg="CreateContainer within sandbox \"b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:44:04.719788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019973224.mount: Deactivated successfully.
Sep 12 17:44:04.724909 containerd[1979]: time="2025-09-12T17:44:04.724753432Z" level=info msg="CreateContainer within sandbox \"b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd\""
Sep 12 17:44:04.734935 containerd[1979]: time="2025-09-12T17:44:04.734884226Z" level=info msg="StartContainer for \"8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd\""
Sep 12 17:44:04.772543 systemd[1]: Started cri-containerd-8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd.scope - libcontainer container 8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd.
Sep 12 17:44:04.831690 containerd[1979]: time="2025-09-12T17:44:04.831520154Z" level=info msg="StartContainer for \"8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd\" returns successfully"
Sep 12 17:44:07.625051 update_engine[1967]: I20250912 17:44:07.623072 1967 update_attempter.cc:509] Updating boot flags...
Sep 12 17:44:07.738161 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3528)
Sep 12 17:44:08.072043 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3533)
Sep 12 17:44:12.540963 sudo[2306]: pam_unix(sudo:session): session closed for user root
Sep 12 17:44:12.567968 sshd[2303]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:12.576472 systemd[1]: sshd@6-172.31.28.238:22-147.75.109.163:35034.service: Deactivated successfully.
Sep 12 17:44:12.581760 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:44:12.582912 systemd[1]: session-7.scope: Consumed 4.912s CPU time, 142.7M memory peak, 0B memory swap peak.
Sep 12 17:44:12.586825 systemd-logind[1965]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:44:12.589224 systemd-logind[1965]: Removed session 7.
Sep 12 17:44:17.107991 kubelet[3173]: I0912 17:44:17.107918 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-xqr5s" podStartSLOduration=13.094885668 podStartE2EDuration="17.107900077s" podCreationTimestamp="2025-09-12 17:44:00 +0000 UTC" firstStartedPulling="2025-09-12 17:44:00.640895706 +0000 UTC m=+7.203534890" lastFinishedPulling="2025-09-12 17:44:04.653910098 +0000 UTC m=+11.216549299" observedRunningTime="2025-09-12 17:44:05.007833984 +0000 UTC m=+11.570473194" watchObservedRunningTime="2025-09-12 17:44:17.107900077 +0000 UTC m=+23.670539285"
Sep 12 17:44:17.115876 kubelet[3173]: W0912 17:44:17.115736 3173 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-28-238" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-238' and this object
Sep 12 17:44:17.117713 kubelet[3173]: W0912 17:44:17.115736 3173 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-28-238" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-238' and this object
Sep 12 17:44:17.119725 kubelet[3173]: E0912 17:44:17.119069 3173 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-238' and this object" logger="UnhandledError"
Sep 12 17:44:17.119725 kubelet[3173]: E0912 17:44:17.119069 3173 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-28-238\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-238' and this object" logger="UnhandledError"
Sep 12 17:44:17.119725 kubelet[3173]: W0912 17:44:17.119298 3173 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-238" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-28-238' and this object
Sep 12 17:44:17.119725 kubelet[3173]: E0912 17:44:17.119328 3173 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-238' and this object" logger="UnhandledError"
Sep 12 17:44:17.121342 systemd[1]: Created slice kubepods-besteffort-pod3518c655_a53d_4288_bfc4_d52154d6f402.slice - libcontainer container kubepods-besteffort-pod3518c655_a53d_4288_bfc4_d52154d6f402.slice.
Sep 12 17:44:17.281347 kubelet[3173]: I0912 17:44:17.281235 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3518c655-a53d-4288-bfc4-d52154d6f402-typha-certs\") pod \"calico-typha-6f97854bd9-v5bwm\" (UID: \"3518c655-a53d-4288-bfc4-d52154d6f402\") " pod="calico-system/calico-typha-6f97854bd9-v5bwm"
Sep 12 17:44:17.281347 kubelet[3173]: I0912 17:44:17.281290 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3518c655-a53d-4288-bfc4-d52154d6f402-tigera-ca-bundle\") pod \"calico-typha-6f97854bd9-v5bwm\" (UID: \"3518c655-a53d-4288-bfc4-d52154d6f402\") " pod="calico-system/calico-typha-6f97854bd9-v5bwm"
Sep 12 17:44:17.281347 kubelet[3173]: I0912 17:44:17.281317 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22bp\" (UniqueName: \"kubernetes.io/projected/3518c655-a53d-4288-bfc4-d52154d6f402-kube-api-access-j22bp\") pod \"calico-typha-6f97854bd9-v5bwm\" (UID: \"3518c655-a53d-4288-bfc4-d52154d6f402\") " pod="calico-system/calico-typha-6f97854bd9-v5bwm"
Sep 12 17:44:17.503945 systemd[1]: Created slice kubepods-besteffort-pod46a84631_422b_4dc1_ad08_ed4ad34121c1.slice - libcontainer container kubepods-besteffort-pod46a84631_422b_4dc1_ad08_ed4ad34121c1.slice.
Sep 12 17:44:17.684706 kubelet[3173]: I0912 17:44:17.684560 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46a84631-422b-4dc1-ad08-ed4ad34121c1-tigera-ca-bundle\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684706 kubelet[3173]: I0912 17:44:17.684607 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-cni-log-dir\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684706 kubelet[3173]: I0912 17:44:17.684624 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-flexvol-driver-host\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684706 kubelet[3173]: I0912 17:44:17.684647 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-policysync\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684706 kubelet[3173]: I0912 17:44:17.684663 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-var-run-calico\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684976 kubelet[3173]: I0912 17:44:17.684749 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-cni-bin-dir\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684976 kubelet[3173]: I0912 17:44:17.684786 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-lib-modules\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684976 kubelet[3173]: I0912 17:44:17.684813 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-cni-net-dir\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684976 kubelet[3173]: I0912 17:44:17.684839 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjm2\" (UniqueName: \"kubernetes.io/projected/46a84631-422b-4dc1-ad08-ed4ad34121c1-kube-api-access-jjjm2\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.684976 kubelet[3173]: I0912 17:44:17.684867 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/46a84631-422b-4dc1-ad08-ed4ad34121c1-node-certs\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.685149 kubelet[3173]: I0912 17:44:17.684881 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-var-lib-calico\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.685149 kubelet[3173]: I0912 17:44:17.684901 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46a84631-422b-4dc1-ad08-ed4ad34121c1-xtables-lock\") pod \"calico-node-bcrbv\" (UID: \"46a84631-422b-4dc1-ad08-ed4ad34121c1\") " pod="calico-system/calico-node-bcrbv"
Sep 12 17:44:17.733975 kubelet[3173]: E0912 17:44:17.733617 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55"
Sep 12 17:44:17.804308 kubelet[3173]: E0912 17:44:17.803840 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.804308 kubelet[3173]: W0912 17:44:17.803865 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.804308 kubelet[3173]: E0912 17:44:17.803888 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.887527 kubelet[3173]: E0912 17:44:17.887402 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.887527 kubelet[3173]: W0912 17:44:17.887432 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.887527 kubelet[3173]: E0912 17:44:17.887457 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.887527 kubelet[3173]: I0912 17:44:17.887505 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ee7b516d-8c24-4672-9d6b-95b402198b55-varrun\") pod \"csi-node-driver-b4wvz\" (UID: \"ee7b516d-8c24-4672-9d6b-95b402198b55\") " pod="calico-system/csi-node-driver-b4wvz"
Sep 12 17:44:17.887852 kubelet[3173]: E0912 17:44:17.887833 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.887852 kubelet[3173]: W0912 17:44:17.887846 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.887939 kubelet[3173]: E0912 17:44:17.887877 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.889255 kubelet[3173]: E0912 17:44:17.888166 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.889255 kubelet[3173]: W0912 17:44:17.888180 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.889255 kubelet[3173]: E0912 17:44:17.889066 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.889255 kubelet[3173]: E0912 17:44:17.889149 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.889255 kubelet[3173]: W0912 17:44:17.889161 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.889255 kubelet[3173]: E0912 17:44:17.889173 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.889625 kubelet[3173]: E0912 17:44:17.889420 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.889625 kubelet[3173]: W0912 17:44:17.889431 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.889625 kubelet[3173]: E0912 17:44:17.889443 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.889766 kubelet[3173]: E0912 17:44:17.889665 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.889766 kubelet[3173]: W0912 17:44:17.889675 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.889766 kubelet[3173]: E0912 17:44:17.889687 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.889766 kubelet[3173]: I0912 17:44:17.889730 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b516d-8c24-4672-9d6b-95b402198b55-socket-dir\") pod \"csi-node-driver-b4wvz\" (UID: \"ee7b516d-8c24-4672-9d6b-95b402198b55\") " pod="calico-system/csi-node-driver-b4wvz"
Sep 12 17:44:17.889993 kubelet[3173]: E0912 17:44:17.889972 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.889993 kubelet[3173]: W0912 17:44:17.889993 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.890121 kubelet[3173]: E0912 17:44:17.890015 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.890121 kubelet[3173]: I0912 17:44:17.890099 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87m7\" (UniqueName: \"kubernetes.io/projected/ee7b516d-8c24-4672-9d6b-95b402198b55-kube-api-access-r87m7\") pod \"csi-node-driver-b4wvz\" (UID: \"ee7b516d-8c24-4672-9d6b-95b402198b55\") " pod="calico-system/csi-node-driver-b4wvz"
Sep 12 17:44:17.890328 kubelet[3173]: E0912 17:44:17.890315 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.890328 kubelet[3173]: W0912 17:44:17.890327 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.890428 kubelet[3173]: E0912 17:44:17.890365 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.890627 kubelet[3173]: E0912 17:44:17.890610 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.890627 kubelet[3173]: W0912 17:44:17.890627 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.890748 kubelet[3173]: E0912 17:44:17.890652 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.890898 kubelet[3173]: E0912 17:44:17.890882 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.890944 kubelet[3173]: W0912 17:44:17.890899 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.890986 kubelet[3173]: E0912 17:44:17.890975 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.892055 kubelet[3173]: E0912 17:44:17.891942 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.892055 kubelet[3173]: W0912 17:44:17.891959 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.892055 kubelet[3173]: E0912 17:44:17.891982 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.892055 kubelet[3173]: I0912 17:44:17.892007 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b516d-8c24-4672-9d6b-95b402198b55-registration-dir\") pod \"csi-node-driver-b4wvz\" (UID: \"ee7b516d-8c24-4672-9d6b-95b402198b55\") " pod="calico-system/csi-node-driver-b4wvz"
Sep 12 17:44:17.892734 kubelet[3173]: E0912 17:44:17.892316 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.892734 kubelet[3173]: W0912 17:44:17.892334 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.892734 kubelet[3173]: E0912 17:44:17.892354 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.892734 kubelet[3173]: E0912 17:44:17.892616 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.892734 kubelet[3173]: W0912 17:44:17.892627 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.892734 kubelet[3173]: E0912 17:44:17.892651 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.893065 kubelet[3173]: E0912 17:44:17.892886 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.893065 kubelet[3173]: W0912 17:44:17.892897 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.893065 kubelet[3173]: E0912 17:44:17.892914 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.893208 kubelet[3173]: E0912 17:44:17.893171 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.893208 kubelet[3173]: W0912 17:44:17.893181 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.893304 kubelet[3173]: E0912 17:44:17.893207 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.895038 kubelet[3173]: E0912 17:44:17.893442 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.895038 kubelet[3173]: W0912 17:44:17.893455 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.895038 kubelet[3173]: E0912 17:44:17.893567 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.895038 kubelet[3173]: I0912 17:44:17.893594 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b516d-8c24-4672-9d6b-95b402198b55-kubelet-dir\") pod \"csi-node-driver-b4wvz\" (UID: \"ee7b516d-8c24-4672-9d6b-95b402198b55\") " pod="calico-system/csi-node-driver-b4wvz"
Sep 12 17:44:17.895038 kubelet[3173]: E0912 17:44:17.893776 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.895038 kubelet[3173]: W0912 17:44:17.893785 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.895038 kubelet[3173]: E0912 17:44:17.893801 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.895038 kubelet[3173]: E0912 17:44:17.894229 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.895038 kubelet[3173]: W0912 17:44:17.894241 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.895433 kubelet[3173]: E0912 17:44:17.894268 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.895433 kubelet[3173]: E0912 17:44:17.894495 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.895433 kubelet[3173]: W0912 17:44:17.894505 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.895433 kubelet[3173]: E0912 17:44:17.894517 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.895433 kubelet[3173]: E0912 17:44:17.894859 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.895433 kubelet[3173]: W0912 17:44:17.894871 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.895433 kubelet[3173]: E0912 17:44:17.894883 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.995605 kubelet[3173]: E0912 17:44:17.995290 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.995605 kubelet[3173]: W0912 17:44:17.995327 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.995605 kubelet[3173]: E0912 17:44:17.995356 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.996591 kubelet[3173]: E0912 17:44:17.996547 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.996591 kubelet[3173]: W0912 17:44:17.996570 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.996888 kubelet[3173]: E0912 17:44:17.996595 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.997283 kubelet[3173]: E0912 17:44:17.997264 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.997552 kubelet[3173]: W0912 17:44:17.997385 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.997552 kubelet[3173]: E0912 17:44:17.997414 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.997876 kubelet[3173]: E0912 17:44:17.997860 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.998052 kubelet[3173]: W0912 17:44:17.997960 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.998052 kubelet[3173]: E0912 17:44:17.997996 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.998411 kubelet[3173]: E0912 17:44:17.998340 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.998411 kubelet[3173]: W0912 17:44:17.998387 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.999159 kubelet[3173]: E0912 17:44:17.998411 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.999159 kubelet[3173]: E0912 17:44:17.998685 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.999159 kubelet[3173]: W0912 17:44:17.998699 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.999159 kubelet[3173]: E0912 17:44:17.998778 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.999353 kubelet[3173]: E0912 17:44:17.999217 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.999353 kubelet[3173]: W0912 17:44:17.999295 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:17.999437 kubelet[3173]: E0912 17:44:17.999351 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:17.999765 kubelet[3173]: E0912 17:44:17.999746 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:17.999864 kubelet[3173]: W0912 17:44:17.999764 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.000195 kubelet[3173]: E0912 17:44:18.000016 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.000252 kubelet[3173]: E0912 17:44:18.000195 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.000252 kubelet[3173]: W0912 17:44:18.000207 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.000252 kubelet[3173]: E0912 17:44:18.000229 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.000540 kubelet[3173]: E0912 17:44:18.000521 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.000540 kubelet[3173]: W0912 17:44:18.000539 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.000935 kubelet[3173]: E0912 17:44:18.000575 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.000935 kubelet[3173]: E0912 17:44:18.000834 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.000935 kubelet[3173]: W0912 17:44:18.000846 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.000935 kubelet[3173]: E0912 17:44:18.000873 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.001438 kubelet[3173]: E0912 17:44:18.001124 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.001438 kubelet[3173]: W0912 17:44:18.001136 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.001438 kubelet[3173]: E0912 17:44:18.001153 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.002312 kubelet[3173]: E0912 17:44:18.002292 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.002312 kubelet[3173]: W0912 17:44:18.002309 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.002432 kubelet[3173]: E0912 17:44:18.002332 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.003059 kubelet[3173]: E0912 17:44:18.002737 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.003059 kubelet[3173]: W0912 17:44:18.002754 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.003059 kubelet[3173]: E0912 17:44:18.002785 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.003236 kubelet[3173]: E0912 17:44:18.003077 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.003236 kubelet[3173]: W0912 17:44:18.003090 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.003236 kubelet[3173]: E0912 17:44:18.003150 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.003373 kubelet[3173]: E0912 17:44:18.003340 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.003373 kubelet[3173]: W0912 17:44:18.003351 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.003469 kubelet[3173]: E0912 17:44:18.003433 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.004094 kubelet[3173]: E0912 17:44:18.003750 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.004094 kubelet[3173]: W0912 17:44:18.003766 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.004094 kubelet[3173]: E0912 17:44:18.003849 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.004253 kubelet[3173]: E0912 17:44:18.004114 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.004253 kubelet[3173]: W0912 17:44:18.004126 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.004360 kubelet[3173]: E0912 17:44:18.004250 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.004466 kubelet[3173]: E0912 17:44:18.004410 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.004466 kubelet[3173]: W0912 17:44:18.004421 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.004466 kubelet[3173]: E0912 17:44:18.004451 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.007057 kubelet[3173]: E0912 17:44:18.006526 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.007057 kubelet[3173]: W0912 17:44:18.006543 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.007057 kubelet[3173]: E0912 17:44:18.006563 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.007057 kubelet[3173]: E0912 17:44:18.006812 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.007057 kubelet[3173]: W0912 17:44:18.006823 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.007057 kubelet[3173]: E0912 17:44:18.006904 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.007376 kubelet[3173]: E0912 17:44:18.007179 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.007376 kubelet[3173]: W0912 17:44:18.007193 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.007376 kubelet[3173]: E0912 17:44:18.007279 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.009521 kubelet[3173]: E0912 17:44:18.008479 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.009521 kubelet[3173]: W0912 17:44:18.008495 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.009521 kubelet[3173]: E0912 17:44:18.008722 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.009521 kubelet[3173]: W0912 17:44:18.008736 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.009521 kubelet[3173]: E0912 17:44:18.008993 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.009521 kubelet[3173]: W0912 17:44:18.009006 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.009521 kubelet[3173]: E0912 17:44:18.009075 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.009521 kubelet[3173]: E0912 17:44:18.009110 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.010379 kubelet[3173]: E0912 17:44:18.010003 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.010379 kubelet[3173]: W0912 17:44:18.010032 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.010379 kubelet[3173]: E0912 17:44:18.010058 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.011036 kubelet[3173]: E0912 17:44:18.011004 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.011119 kubelet[3173]: W0912 17:44:18.011062 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.011119 kubelet[3173]: E0912 17:44:18.011083 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.012494 kubelet[3173]: E0912 17:44:18.012187 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.012494 kubelet[3173]: W0912 17:44:18.012205 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.012494 kubelet[3173]: E0912 17:44:18.012221 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.012494 kubelet[3173]: E0912 17:44:18.012262 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.012494 kubelet[3173]: E0912 17:44:18.012472 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.012494 kubelet[3173]: W0912 17:44:18.012482 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.012494 kubelet[3173]: E0912 17:44:18.012494 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.013012 kubelet[3173]: E0912 17:44:18.012760 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.013012 kubelet[3173]: W0912 17:44:18.012771 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.013012 kubelet[3173]: E0912 17:44:18.012785 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:44:18.021179 kubelet[3173]: E0912 17:44:18.019787 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:44:18.021179 kubelet[3173]: W0912 17:44:18.019821 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:44:18.021179 kubelet[3173]: E0912 17:44:18.019854 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 17:44:18.021179 kubelet[3173]: E0912 17:44:18.020674 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.021179 kubelet[3173]: W0912 17:44:18.020690 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.021179 kubelet[3173]: E0912 17:44:18.020711 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.026553 kubelet[3173]: E0912 17:44:18.026350 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.026553 kubelet[3173]: W0912 17:44:18.026373 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.026553 kubelet[3173]: E0912 17:44:18.026397 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.108556 kubelet[3173]: E0912 17:44:18.108453 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.108556 kubelet[3173]: W0912 17:44:18.108480 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.108556 kubelet[3173]: E0912 17:44:18.108502 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.109381 kubelet[3173]: E0912 17:44:18.109290 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.109381 kubelet[3173]: W0912 17:44:18.109306 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.109381 kubelet[3173]: E0912 17:44:18.109321 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.109727 kubelet[3173]: E0912 17:44:18.109663 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.109727 kubelet[3173]: W0912 17:44:18.109674 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.109727 kubelet[3173]: E0912 17:44:18.109686 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:44:18.144744 kubelet[3173]: E0912 17:44:18.144714 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.144744 kubelet[3173]: W0912 17:44:18.144740 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.144880 kubelet[3173]: E0912 17:44:18.144759 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.147841 kubelet[3173]: E0912 17:44:18.147790 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.147841 kubelet[3173]: W0912 17:44:18.147815 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.147841 kubelet[3173]: E0912 17:44:18.147836 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.210787 kubelet[3173]: E0912 17:44:18.210736 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.210787 kubelet[3173]: W0912 17:44:18.210762 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.210787 kubelet[3173]: E0912 17:44:18.210784 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.259049 kubelet[3173]: E0912 17:44:18.258095 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:18.259049 kubelet[3173]: W0912 17:44:18.258132 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:18.259049 kubelet[3173]: E0912 17:44:18.258156 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:18.329292 containerd[1979]: time="2025-09-12T17:44:18.329245951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f97854bd9-v5bwm,Uid:3518c655-a53d-4288-bfc4-d52154d6f402,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:18.363203 containerd[1979]: time="2025-09-12T17:44:18.363002728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:18.363951 containerd[1979]: time="2025-09-12T17:44:18.363196808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:18.363951 containerd[1979]: time="2025-09-12T17:44:18.363776491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:18.363951 containerd[1979]: time="2025-09-12T17:44:18.363873032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:18.413222 systemd[1]: Started cri-containerd-5d74e7c3f8b36d91597686c1254d8a94fca68cfcf4eddaeb988add2bb55e9c98.scope - libcontainer container 5d74e7c3f8b36d91597686c1254d8a94fca68cfcf4eddaeb988add2bb55e9c98. Sep 12 17:44:18.414886 containerd[1979]: time="2025-09-12T17:44:18.414620706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bcrbv,Uid:46a84631-422b-4dc1-ad08-ed4ad34121c1,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:18.473731 containerd[1979]: time="2025-09-12T17:44:18.472233675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:18.473731 containerd[1979]: time="2025-09-12T17:44:18.472332303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:18.473731 containerd[1979]: time="2025-09-12T17:44:18.472353836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:18.473731 containerd[1979]: time="2025-09-12T17:44:18.472488699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:18.494073 containerd[1979]: time="2025-09-12T17:44:18.493801336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f97854bd9-v5bwm,Uid:3518c655-a53d-4288-bfc4-d52154d6f402,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d74e7c3f8b36d91597686c1254d8a94fca68cfcf4eddaeb988add2bb55e9c98\"" Sep 12 17:44:18.497712 containerd[1979]: time="2025-09-12T17:44:18.497410839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:44:18.515250 systemd[1]: Started cri-containerd-f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329.scope - libcontainer container f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329. Sep 12 17:44:18.548441 containerd[1979]: time="2025-09-12T17:44:18.548163421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bcrbv,Uid:46a84631-422b-4dc1-ad08-ed4ad34121c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\"" Sep 12 17:44:19.563175 kubelet[3173]: E0912 17:44:19.563075 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:19.839139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3441442400.mount: Deactivated successfully. 
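
The kubelet errors repeated throughout this boot come from its FlexVolume dynamic prober: on each probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec it execs the driver binary inside every <vendor>~<driver> directory with the single argument init and unmarshals the process's stdout as JSON. The nodeagent~uds/uds binary does not exist yet at this point, so stdout is empty and decoding fails with "unexpected end of JSON input". Below is a minimal Go sketch of the init half of that call convention; it is an illustrative stand-in following the documented FlexVolume response shape, not the actual uds driver, which the pod2daemon-flexvol container appears to install later in this log.

    // flexvol_init.go - minimal sketch of a FlexVolume driver's "init" verb.
    // The kubelet runs <plugin-dir>/<vendor~driver>/<driver> init and parses
    // stdout as JSON; an empty stdout is exactly the "unexpected end of JSON
    // input" failure logged above.
    package main

    import (
        "encoding/json"
        "os"
    )

    // driverStatus mirrors the FlexVolume JSON response object.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Declare success and opt out of attach/detach support.
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Verbs this sketch does not implement are reported as unsupported.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
        os.Exit(1)
    }
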
Sep 12 17:44:20.929648 containerd[1979]: time="2025-09-12T17:44:20.929546227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:20.930669 containerd[1979]: time="2025-09-12T17:44:20.930558777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:44:20.932043 containerd[1979]: time="2025-09-12T17:44:20.931683642Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:20.933847 containerd[1979]: time="2025-09-12T17:44:20.933810981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:20.934576 containerd[1979]: time="2025-09-12T17:44:20.934544256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.437088726s" Sep 12 17:44:20.934703 containerd[1979]: time="2025-09-12T17:44:20.934682678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:44:20.936631 containerd[1979]: time="2025-09-12T17:44:20.936603393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:44:20.951070 containerd[1979]: time="2025-09-12T17:44:20.951009369Z" level=info msg="CreateContainer within sandbox \"5d74e7c3f8b36d91597686c1254d8a94fca68cfcf4eddaeb988add2bb55e9c98\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:44:20.992619 containerd[1979]: time="2025-09-12T17:44:20.992577736Z" level=info msg="CreateContainer within sandbox \"5d74e7c3f8b36d91597686c1254d8a94fca68cfcf4eddaeb988add2bb55e9c98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bb957ae98f7a706200618c0b6c67a652d045e17d6fbb55d5990ee6c0cff43f85\"" Sep 12 17:44:20.993727 containerd[1979]: time="2025-09-12T17:44:20.993695748Z" level=info msg="StartContainer for \"bb957ae98f7a706200618c0b6c67a652d045e17d6fbb55d5990ee6c0cff43f85\"" Sep 12 17:44:21.079287 systemd[1]: Started cri-containerd-bb957ae98f7a706200618c0b6c67a652d045e17d6fbb55d5990ee6c0cff43f85.scope - libcontainer container bb957ae98f7a706200618c0b6c67a652d045e17d6fbb55d5990ee6c0cff43f85. 
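
The PullImage/ImageCreate sequence above is the kubelet driving containerd over CRI; an equivalent pull can be issued directly against the containerd socket. The following is a sketch using the containerd Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in (both typical defaults, not confirmed by this log):

    // pull_typha.go - sketch of the pull logged above, via the containerd client.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI keeps its images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // containerd resolves the tag to the repo digest recorded in the
        // ImageCreate events above (ghcr.io/flatcar/calico/typha@sha256:f4a3...).
        fmt.Println("pulled", img.Name())
    }
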
Sep 12 17:44:21.141246 containerd[1979]: time="2025-09-12T17:44:21.141193340Z" level=info msg="StartContainer for \"bb957ae98f7a706200618c0b6c67a652d045e17d6fbb55d5990ee6c0cff43f85\" returns successfully" Sep 12 17:44:21.563560 kubelet[3173]: E0912 17:44:21.563498 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:22.067008 kubelet[3173]: I0912 17:44:22.066947 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f97854bd9-v5bwm" podStartSLOduration=2.627898928 podStartE2EDuration="5.066931906s" podCreationTimestamp="2025-09-12 17:44:17 +0000 UTC" firstStartedPulling="2025-09-12 17:44:18.496739791 +0000 UTC m=+25.059378989" lastFinishedPulling="2025-09-12 17:44:20.935772764 +0000 UTC m=+27.498411967" observedRunningTime="2025-09-12 17:44:22.066608813 +0000 UTC m=+28.629248019" watchObservedRunningTime="2025-09-12 17:44:22.066931906 +0000 UTC m=+28.629571139" Sep 12 17:44:22.124056 kubelet[3173]: E0912 17:44:22.123174 3173 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:44:22.124056 kubelet[3173]: W0912 17:44:22.123205 3173 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:44:22.124056 kubelet[3173]: E0912 17:44:22.123253 3173 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:44:22.234717 containerd[1979]: time="2025-09-12T17:44:22.234666171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:22.236491 containerd[1979]: time="2025-09-12T17:44:22.236333057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:44:22.238732 containerd[1979]: time="2025-09-12T17:44:22.238443606Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:22.242137 containerd[1979]: time="2025-09-12T17:44:22.242101341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:22.242573 containerd[1979]: time="2025-09-12T17:44:22.242542228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.30590227s" Sep 12 17:44:22.242656 containerd[1979]: time="2025-09-12T17:44:22.242578569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:44:22.245184 containerd[1979]: time="2025-09-12T17:44:22.245138019Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:44:22.272010 containerd[1979]: time="2025-09-12T17:44:22.271958098Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81\"" Sep 12 17:44:22.272749 containerd[1979]: time="2025-09-12T17:44:22.272693121Z" level=info msg="StartContainer for \"12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81\"" Sep 12 17:44:22.308333 systemd[1]: Started cri-containerd-12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81.scope - libcontainer container 12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81. Sep 12 17:44:22.344648 containerd[1979]: time="2025-09-12T17:44:22.344538765Z" level=info msg="StartContainer for \"12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81\" returns successfully" Sep 12 17:44:22.363286 systemd[1]: cri-containerd-12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81.scope: Deactivated successfully. Sep 12 17:44:22.390480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81-rootfs.mount: Deactivated successfully. 
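
The flexvol-driver container above starts and is deactivated within about twenty milliseconds, which is the init-container pattern: judging by its name and the nodeagent~uds path the prober keeps failing on, its job is to copy a FlexVolume driver onto the host and exit. The containerd entries also record how long each image pull took ("in 2.437088726s" for typha, "in 1.30590227s" for pod2daemon-flexvol); below is a small stand-alone sketch that extracts those figures from a journal stream like this one. The regex is written for the escaped-quote form these lines use and is an assumption about the exact output format.

    // pull_times.go - sketch that pulls "Pulled image ... in <duration>" figures
    // out of a journal stream like this log; adjust the pattern for journal
    // output that renders quotes differently.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var pulled = regexp.MustCompile(`Pulled image \\"([^\\"]+)\\".*? in ([0-9.]+m?s)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
        for sc.Scan() {
            if m := pulled.FindStringSubmatch(sc.Text()); m != nil {
                if d, err := time.ParseDuration(m[2]); err == nil {
                    fmt.Printf("%-60s %v\n", m[1], d)
                }
            }
        }
    }
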
Sep 12 17:44:22.462763 containerd[1979]: time="2025-09-12T17:44:22.426335713Z" level=info msg="shim disconnected" id=12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81 namespace=k8s.io Sep 12 17:44:22.462763 containerd[1979]: time="2025-09-12T17:44:22.462564644Z" level=warning msg="cleaning up after shim disconnected" id=12b0856f2667222910f454e1b69e1adae53a36148a8e6547823c45f08bf09e81 namespace=k8s.io Sep 12 17:44:22.462763 containerd[1979]: time="2025-09-12T17:44:22.462581695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:44:23.056916 kubelet[3173]: I0912 17:44:23.056366 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:44:23.059110 containerd[1979]: time="2025-09-12T17:44:23.057978400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:44:23.562502 kubelet[3173]: E0912 17:44:23.561695 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:25.563682 kubelet[3173]: E0912 17:44:25.563603 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:26.477944 containerd[1979]: time="2025-09-12T17:44:26.477896307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:26.482644 containerd[1979]: time="2025-09-12T17:44:26.482581665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:44:26.485058 containerd[1979]: time="2025-09-12T17:44:26.484968606Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:26.488692 containerd[1979]: time="2025-09-12T17:44:26.488648592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:26.489956 containerd[1979]: time="2025-09-12T17:44:26.489372794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.43135973s" Sep 12 17:44:26.489956 containerd[1979]: time="2025-09-12T17:44:26.489414390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:44:26.492397 containerd[1979]: time="2025-09-12T17:44:26.492356152Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:44:26.537609 containerd[1979]: 
time="2025-09-12T17:44:26.537559593Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655\"" Sep 12 17:44:26.539878 containerd[1979]: time="2025-09-12T17:44:26.538381182Z" level=info msg="StartContainer for \"9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655\"" Sep 12 17:44:26.585265 systemd[1]: Started cri-containerd-9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655.scope - libcontainer container 9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655. Sep 12 17:44:26.627178 containerd[1979]: time="2025-09-12T17:44:26.627129517Z" level=info msg="StartContainer for \"9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655\" returns successfully" Sep 12 17:44:27.532479 systemd[1]: cri-containerd-9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655.scope: Deactivated successfully. Sep 12 17:44:27.562901 kubelet[3173]: E0912 17:44:27.562813 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:27.581973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655-rootfs.mount: Deactivated successfully. Sep 12 17:44:27.599390 containerd[1979]: time="2025-09-12T17:44:27.599327369Z" level=info msg="shim disconnected" id=9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655 namespace=k8s.io Sep 12 17:44:27.599390 containerd[1979]: time="2025-09-12T17:44:27.599383629Z" level=warning msg="cleaning up after shim disconnected" id=9a0de6288f8872cac349b9b4ec1b4c4e2f2d1de8b844ff0a3d0ac1d20aec2655 namespace=k8s.io Sep 12 17:44:27.599390 containerd[1979]: time="2025-09-12T17:44:27.599398410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:44:27.623434 kubelet[3173]: I0912 17:44:27.622732 3173 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:44:27.682383 kubelet[3173]: I0912 17:44:27.682345 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dce169d5-8166-4cc4-9317-2154c59c4245-config-volume\") pod \"coredns-7c65d6cfc9-rww8b\" (UID: \"dce169d5-8166-4cc4-9317-2154c59c4245\") " pod="kube-system/coredns-7c65d6cfc9-rww8b" Sep 12 17:44:27.682529 kubelet[3173]: I0912 17:44:27.682404 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngcq\" (UniqueName: \"kubernetes.io/projected/dce169d5-8166-4cc4-9317-2154c59c4245-kube-api-access-5ngcq\") pod \"coredns-7c65d6cfc9-rww8b\" (UID: \"dce169d5-8166-4cc4-9317-2154c59c4245\") " pod="kube-system/coredns-7c65d6cfc9-rww8b" Sep 12 17:44:27.682529 kubelet[3173]: I0912 17:44:27.682453 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ht5\" (UniqueName: \"kubernetes.io/projected/9ec97501-8553-450b-a6ff-0411fa4b5fba-kube-api-access-97ht5\") pod \"coredns-7c65d6cfc9-xd7b8\" (UID: \"9ec97501-8553-450b-a6ff-0411fa4b5fba\") " pod="kube-system/coredns-7c65d6cfc9-xd7b8" Sep 
12 17:44:27.682529 kubelet[3173]: I0912 17:44:27.682487 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ec97501-8553-450b-a6ff-0411fa4b5fba-config-volume\") pod \"coredns-7c65d6cfc9-xd7b8\" (UID: \"9ec97501-8553-450b-a6ff-0411fa4b5fba\") " pod="kube-system/coredns-7c65d6cfc9-xd7b8" Sep 12 17:44:27.687710 systemd[1]: Created slice kubepods-burstable-pod9ec97501_8553_450b_a6ff_0411fa4b5fba.slice - libcontainer container kubepods-burstable-pod9ec97501_8553_450b_a6ff_0411fa4b5fba.slice. Sep 12 17:44:27.711939 systemd[1]: Created slice kubepods-burstable-poddce169d5_8166_4cc4_9317_2154c59c4245.slice - libcontainer container kubepods-burstable-poddce169d5_8166_4cc4_9317_2154c59c4245.slice. Sep 12 17:44:27.722693 systemd[1]: Created slice kubepods-besteffort-pod92a98262_021b_493d_b45d_176a655b449a.slice - libcontainer container kubepods-besteffort-pod92a98262_021b_493d_b45d_176a655b449a.slice. Sep 12 17:44:27.734033 systemd[1]: Created slice kubepods-besteffort-podae71cbe7_247e_4add_9e62_3d52dac5dc6c.slice - libcontainer container kubepods-besteffort-podae71cbe7_247e_4add_9e62_3d52dac5dc6c.slice. Sep 12 17:44:27.743194 systemd[1]: Created slice kubepods-besteffort-podf3d2640f_4a08_48ab_aceb_a50afe09f769.slice - libcontainer container kubepods-besteffort-podf3d2640f_4a08_48ab_aceb_a50afe09f769.slice. Sep 12 17:44:27.755130 systemd[1]: Created slice kubepods-besteffort-podd4dc518c_dc79_425e_b59d_e5f73f27e330.slice - libcontainer container kubepods-besteffort-podd4dc518c_dc79_425e_b59d_e5f73f27e330.slice. Sep 12 17:44:27.764674 systemd[1]: Created slice kubepods-besteffort-podc89a2a04_b2de_4ae0_95af_29cde938d697.slice - libcontainer container kubepods-besteffort-podc89a2a04_b2de_4ae0_95af_29cde938d697.slice. 
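
The Created slice entries above show the kubelet's systemd cgroup driver at work: each pod lands in a slice named after its QoS class (burstable for the CoreDNS pods, besteffort for the Calico and whisker workloads) plus the pod UID with dashes rewritten to underscores, since "-" is the hierarchy separator in systemd unit names. A sketch of that naming rule, inferred from the entries above rather than taken from kubelet source:

    // slice_name.go - naming rule inferred from the "Created slice" entries:
    // kubepods-<qos>-pod<uid-with-underscores>.slice
    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name for a pod; dashes in the UID are
    // mapped to underscores because "-" separates hierarchy levels in unit names.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // Reproduces kubepods-burstable-pod9ec97501_8553_450b_a6ff_0411fa4b5fba.slice
        // from the coredns-7c65d6cfc9-xd7b8 entry above.
        fmt.Println(podSlice("burstable", "9ec97501-8553-450b-a6ff-0411fa4b5fba"))
    }
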
Sep 12 17:44:27.784438 kubelet[3173]: I0912 17:44:27.783715 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rw98\" (UniqueName: \"kubernetes.io/projected/92a98262-021b-493d-b45d-176a655b449a-kube-api-access-6rw98\") pod \"whisker-7d78848566-lvdjs\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " pod="calico-system/whisker-7d78848566-lvdjs" Sep 12 17:44:27.784438 kubelet[3173]: I0912 17:44:27.783761 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3d2640f-4a08-48ab-aceb-a50afe09f769-config\") pod \"goldmane-7988f88666-wm2jb\" (UID: \"f3d2640f-4a08-48ab-aceb-a50afe09f769\") " pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:27.784438 kubelet[3173]: I0912 17:44:27.783782 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4dc518c-dc79-425e-b59d-e5f73f27e330-calico-apiserver-certs\") pod \"calico-apiserver-59f8688b7d-vc7r8\" (UID: \"d4dc518c-dc79-425e-b59d-e5f73f27e330\") " pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" Sep 12 17:44:27.784438 kubelet[3173]: I0912 17:44:27.783828 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm47b\" (UniqueName: \"kubernetes.io/projected/c89a2a04-b2de-4ae0-95af-29cde938d697-kube-api-access-qm47b\") pod \"calico-apiserver-59f8688b7d-xx7p7\" (UID: \"c89a2a04-b2de-4ae0-95af-29cde938d697\") " pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" Sep 12 17:44:27.784438 kubelet[3173]: I0912 17:44:27.783844 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae71cbe7-247e-4add-9e62-3d52dac5dc6c-tigera-ca-bundle\") pod \"calico-kube-controllers-88b68d46-gcsq2\" (UID: \"ae71cbe7-247e-4add-9e62-3d52dac5dc6c\") " pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" Sep 12 17:44:27.784719 kubelet[3173]: I0912 17:44:27.783863 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d2640f-4a08-48ab-aceb-a50afe09f769-goldmane-ca-bundle\") pod \"goldmane-7988f88666-wm2jb\" (UID: \"f3d2640f-4a08-48ab-aceb-a50afe09f769\") " pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:27.784719 kubelet[3173]: I0912 17:44:27.783879 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92a98262-021b-493d-b45d-176a655b449a-whisker-backend-key-pair\") pod \"whisker-7d78848566-lvdjs\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " pod="calico-system/whisker-7d78848566-lvdjs" Sep 12 17:44:27.784719 kubelet[3173]: I0912 17:44:27.783907 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qrs\" (UniqueName: \"kubernetes.io/projected/d4dc518c-dc79-425e-b59d-e5f73f27e330-kube-api-access-z6qrs\") pod \"calico-apiserver-59f8688b7d-vc7r8\" (UID: \"d4dc518c-dc79-425e-b59d-e5f73f27e330\") " pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" Sep 12 17:44:27.784719 kubelet[3173]: I0912 17:44:27.783923 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/92a98262-021b-493d-b45d-176a655b449a-whisker-ca-bundle\") pod \"whisker-7d78848566-lvdjs\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " pod="calico-system/whisker-7d78848566-lvdjs" Sep 12 17:44:27.784719 kubelet[3173]: I0912 17:44:27.783938 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjj8\" (UniqueName: \"kubernetes.io/projected/f3d2640f-4a08-48ab-aceb-a50afe09f769-kube-api-access-pmjj8\") pod \"goldmane-7988f88666-wm2jb\" (UID: \"f3d2640f-4a08-48ab-aceb-a50afe09f769\") " pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:27.784852 kubelet[3173]: I0912 17:44:27.783955 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c89a2a04-b2de-4ae0-95af-29cde938d697-calico-apiserver-certs\") pod \"calico-apiserver-59f8688b7d-xx7p7\" (UID: \"c89a2a04-b2de-4ae0-95af-29cde938d697\") " pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" Sep 12 17:44:27.784852 kubelet[3173]: I0912 17:44:27.783982 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f3d2640f-4a08-48ab-aceb-a50afe09f769-goldmane-key-pair\") pod \"goldmane-7988f88666-wm2jb\" (UID: \"f3d2640f-4a08-48ab-aceb-a50afe09f769\") " pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:27.784852 kubelet[3173]: I0912 17:44:27.783997 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8k86\" (UniqueName: \"kubernetes.io/projected/ae71cbe7-247e-4add-9e62-3d52dac5dc6c-kube-api-access-m8k86\") pod \"calico-kube-controllers-88b68d46-gcsq2\" (UID: \"ae71cbe7-247e-4add-9e62-3d52dac5dc6c\") " pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" Sep 12 17:44:28.013073 containerd[1979]: time="2025-09-12T17:44:28.012992600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd7b8,Uid:9ec97501-8553-450b-a6ff-0411fa4b5fba,Namespace:kube-system,Attempt:0,}" Sep 12 17:44:28.017705 containerd[1979]: time="2025-09-12T17:44:28.017464924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rww8b,Uid:dce169d5-8166-4cc4-9317-2154c59c4245,Namespace:kube-system,Attempt:0,}" Sep 12 17:44:28.028233 containerd[1979]: time="2025-09-12T17:44:28.028189334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d78848566-lvdjs,Uid:92a98262-021b-493d-b45d-176a655b449a,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:28.040728 containerd[1979]: time="2025-09-12T17:44:28.040616598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88b68d46-gcsq2,Uid:ae71cbe7-247e-4add-9e62-3d52dac5dc6c,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:28.050852 containerd[1979]: time="2025-09-12T17:44:28.050559996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wm2jb,Uid:f3d2640f-4a08-48ab-aceb-a50afe09f769,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:28.061853 containerd[1979]: time="2025-09-12T17:44:28.061811798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-vc7r8,Uid:d4dc518c-dc79-425e-b59d-e5f73f27e330,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:44:28.077049 containerd[1979]: time="2025-09-12T17:44:28.076898163Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-xx7p7,Uid:c89a2a04-b2de-4ae0-95af-29cde938d697,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:44:28.167370 containerd[1979]: time="2025-09-12T17:44:28.167309304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:44:28.496107 containerd[1979]: time="2025-09-12T17:44:28.494918405Z" level=error msg="Failed to destroy network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.496107 containerd[1979]: time="2025-09-12T17:44:28.495090520Z" level=error msg="Failed to destroy network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.499632 containerd[1979]: time="2025-09-12T17:44:28.499515432Z" level=error msg="encountered an error cleaning up failed sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.499632 containerd[1979]: time="2025-09-12T17:44:28.499621923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rww8b,Uid:dce169d5-8166-4cc4-9317-2154c59c4245,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.500098 containerd[1979]: time="2025-09-12T17:44:28.500070588Z" level=error msg="encountered an error cleaning up failed sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.500152 containerd[1979]: time="2025-09-12T17:44:28.500118574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d78848566-lvdjs,Uid:92a98262-021b-493d-b45d-176a655b449a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.509509 containerd[1979]: time="2025-09-12T17:44:28.509472012Z" level=error msg="Failed to destroy network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.509634 kubelet[3173]: E0912 17:44:28.509486 3173 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.509634 kubelet[3173]: E0912 17:44:28.509560 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d78848566-lvdjs" Sep 12 17:44:28.509634 kubelet[3173]: E0912 17:44:28.509580 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d78848566-lvdjs" Sep 12 17:44:28.509768 kubelet[3173]: E0912 17:44:28.509626 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d78848566-lvdjs_calico-system(92a98262-021b-493d-b45d-176a655b449a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d78848566-lvdjs_calico-system(92a98262-021b-493d-b45d-176a655b449a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d78848566-lvdjs" podUID="92a98262-021b-493d-b45d-176a655b449a" Sep 12 17:44:28.509976 kubelet[3173]: E0912 17:44:28.509941 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.510487 kubelet[3173]: E0912 17:44:28.509987 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rww8b" Sep 12 17:44:28.510487 kubelet[3173]: E0912 17:44:28.510005 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rww8b" Sep 12 
17:44:28.510487 kubelet[3173]: E0912 17:44:28.510064 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rww8b_kube-system(dce169d5-8166-4cc4-9317-2154c59c4245)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rww8b_kube-system(dce169d5-8166-4cc4-9317-2154c59c4245)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rww8b" podUID="dce169d5-8166-4cc4-9317-2154c59c4245" Sep 12 17:44:28.510864 containerd[1979]: time="2025-09-12T17:44:28.510833727Z" level=error msg="encountered an error cleaning up failed sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.514202 containerd[1979]: time="2025-09-12T17:44:28.514165570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-xx7p7,Uid:c89a2a04-b2de-4ae0-95af-29cde938d697,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.519447 kubelet[3173]: E0912 17:44:28.519407 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.519579 kubelet[3173]: E0912 17:44:28.519473 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" Sep 12 17:44:28.519579 kubelet[3173]: E0912 17:44:28.519492 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" Sep 12 17:44:28.519579 kubelet[3173]: E0912 17:44:28.519536 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f8688b7d-xx7p7_calico-apiserver(c89a2a04-b2de-4ae0-95af-29cde938d697)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-59f8688b7d-xx7p7_calico-apiserver(c89a2a04-b2de-4ae0-95af-29cde938d697)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" podUID="c89a2a04-b2de-4ae0-95af-29cde938d697" Sep 12 17:44:28.535454 containerd[1979]: time="2025-09-12T17:44:28.535228614Z" level=error msg="Failed to destroy network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.537400 containerd[1979]: time="2025-09-12T17:44:28.537305539Z" level=error msg="Failed to destroy network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.541625 containerd[1979]: time="2025-09-12T17:44:28.538368854Z" level=error msg="encountered an error cleaning up failed sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.541625 containerd[1979]: time="2025-09-12T17:44:28.538525050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd7b8,Uid:9ec97501-8553-450b-a6ff-0411fa4b5fba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.541813 kubelet[3173]: E0912 17:44:28.539928 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.541813 kubelet[3173]: E0912 17:44:28.540012 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xd7b8" Sep 12 17:44:28.541813 kubelet[3173]: E0912 17:44:28.540110 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xd7b8" Sep 12 17:44:28.542049 kubelet[3173]: E0912 17:44:28.540178 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xd7b8_kube-system(9ec97501-8553-450b-a6ff-0411fa4b5fba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xd7b8_kube-system(9ec97501-8553-450b-a6ff-0411fa4b5fba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xd7b8" podUID="9ec97501-8553-450b-a6ff-0411fa4b5fba" Sep 12 17:44:28.545154 containerd[1979]: time="2025-09-12T17:44:28.545118876Z" level=error msg="Failed to destroy network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.545537 containerd[1979]: time="2025-09-12T17:44:28.545514797Z" level=error msg="encountered an error cleaning up failed sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.545634 containerd[1979]: time="2025-09-12T17:44:28.545616566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88b68d46-gcsq2,Uid:ae71cbe7-247e-4add-9e62-3d52dac5dc6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.545903 kubelet[3173]: E0912 17:44:28.545876 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.546123 kubelet[3173]: E0912 17:44:28.546106 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" Sep 12 17:44:28.546901 kubelet[3173]: E0912 17:44:28.546653 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" Sep 12 17:44:28.546901 kubelet[3173]: E0912 17:44:28.546753 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-88b68d46-gcsq2_calico-system(ae71cbe7-247e-4add-9e62-3d52dac5dc6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-88b68d46-gcsq2_calico-system(ae71cbe7-247e-4add-9e62-3d52dac5dc6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" podUID="ae71cbe7-247e-4add-9e62-3d52dac5dc6c" Sep 12 17:44:28.551290 containerd[1979]: time="2025-09-12T17:44:28.551229333Z" level=error msg="encountered an error cleaning up failed sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.551631 containerd[1979]: time="2025-09-12T17:44:28.551313313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wm2jb,Uid:f3d2640f-4a08-48ab-aceb-a50afe09f769,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.552065 kubelet[3173]: E0912 17:44:28.551503 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.552162 kubelet[3173]: E0912 17:44:28.552075 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:28.552162 kubelet[3173]: E0912 17:44:28.552097 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7988f88666-wm2jb" Sep 12 17:44:28.552162 kubelet[3173]: E0912 17:44:28.552149 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-wm2jb_calico-system(f3d2640f-4a08-48ab-aceb-a50afe09f769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-wm2jb_calico-system(f3d2640f-4a08-48ab-aceb-a50afe09f769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wm2jb" podUID="f3d2640f-4a08-48ab-aceb-a50afe09f769" Sep 12 17:44:28.553110 containerd[1979]: time="2025-09-12T17:44:28.552887176Z" level=error msg="Failed to destroy network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.553846 containerd[1979]: time="2025-09-12T17:44:28.553801500Z" level=error msg="encountered an error cleaning up failed sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.553966 containerd[1979]: time="2025-09-12T17:44:28.553854059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-vc7r8,Uid:d4dc518c-dc79-425e-b59d-e5f73f27e330,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.556272 kubelet[3173]: E0912 17:44:28.556217 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:28.556355 kubelet[3173]: E0912 17:44:28.556277 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" Sep 12 17:44:28.556355 kubelet[3173]: E0912 17:44:28.556295 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" Sep 12 17:44:28.556355 kubelet[3173]: E0912 17:44:28.556340 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f8688b7d-vc7r8_calico-apiserver(d4dc518c-dc79-425e-b59d-e5f73f27e330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f8688b7d-vc7r8_calico-apiserver(d4dc518c-dc79-425e-b59d-e5f73f27e330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" podUID="d4dc518c-dc79-425e-b59d-e5f73f27e330" Sep 12 17:44:29.167831 kubelet[3173]: I0912 17:44:29.167632 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:29.169960 kubelet[3173]: I0912 17:44:29.169276 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:29.207746 kubelet[3173]: I0912 17:44:29.207402 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:29.210795 kubelet[3173]: I0912 17:44:29.210767 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:29.213565 kubelet[3173]: I0912 17:44:29.213542 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:29.215353 kubelet[3173]: I0912 17:44:29.215326 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:29.218950 kubelet[3173]: I0912 17:44:29.218925 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:29.229370 containerd[1979]: time="2025-09-12T17:44:29.229112018Z" level=info msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" Sep 12 17:44:29.231041 containerd[1979]: time="2025-09-12T17:44:29.230060992Z" level=info msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" Sep 12 17:44:29.231041 containerd[1979]: time="2025-09-12T17:44:29.230813486Z" level=info msg="Ensure that sandbox a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84 in task-service has been cleanup successfully" Sep 12 17:44:29.231278 containerd[1979]: time="2025-09-12T17:44:29.231180817Z" level=info msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" Sep 12 17:44:29.231340 containerd[1979]: time="2025-09-12T17:44:29.231320126Z" level=info msg="Ensure that sandbox 766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8 in task-service has been cleanup successfully" Sep 12 17:44:29.231425 containerd[1979]: 
time="2025-09-12T17:44:29.231406511Z" level=info msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" Sep 12 17:44:29.231636 containerd[1979]: time="2025-09-12T17:44:29.231612931Z" level=info msg="Ensure that sandbox cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a in task-service has been cleanup successfully" Sep 12 17:44:29.234226 containerd[1979]: time="2025-09-12T17:44:29.232162264Z" level=info msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\"" Sep 12 17:44:29.234367 containerd[1979]: time="2025-09-12T17:44:29.231320842Z" level=info msg="Ensure that sandbox a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d in task-service has been cleanup successfully" Sep 12 17:44:29.234600 containerd[1979]: time="2025-09-12T17:44:29.232186542Z" level=info msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" Sep 12 17:44:29.235086 containerd[1979]: time="2025-09-12T17:44:29.235066892Z" level=info msg="Ensure that sandbox 1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf in task-service has been cleanup successfully" Sep 12 17:44:29.235592 containerd[1979]: time="2025-09-12T17:44:29.235306833Z" level=info msg="Ensure that sandbox 6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983 in task-service has been cleanup successfully" Sep 12 17:44:29.238885 containerd[1979]: time="2025-09-12T17:44:29.232210817Z" level=info msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\"" Sep 12 17:44:29.239456 containerd[1979]: time="2025-09-12T17:44:29.239405784Z" level=info msg="Ensure that sandbox 7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa in task-service has been cleanup successfully" Sep 12 17:44:29.352266 containerd[1979]: time="2025-09-12T17:44:29.352151482Z" level=error msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" failed" error="failed to destroy network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.352450 kubelet[3173]: E0912 17:44:29.352392 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:29.354947 containerd[1979]: time="2025-09-12T17:44:29.354905401Z" level=error msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" failed" error="failed to destroy network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.355356 containerd[1979]: time="2025-09-12T17:44:29.355329821Z" level=error msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" failed" error="failed to destroy 
network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.355466 kubelet[3173]: E0912 17:44:29.355348 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:29.358788 containerd[1979]: time="2025-09-12T17:44:29.358745370Z" level=error msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" failed" error="failed to destroy network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.362910 containerd[1979]: time="2025-09-12T17:44:29.362122112Z" level=error msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" failed" error="failed to destroy network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.363146 containerd[1979]: time="2025-09-12T17:44:29.363118388Z" level=error msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" failed" error="failed to destroy network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.363925 containerd[1979]: time="2025-09-12T17:44:29.363891434Z" level=error msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" failed" error="failed to destroy network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.364150 kubelet[3173]: E0912 17:44:29.355388 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d"} Sep 12 17:44:29.364150 kubelet[3173]: E0912 17:44:29.352443 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a"} Sep 12 17:44:29.364150 kubelet[3173]: E0912 17:44:29.364119 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae71cbe7-247e-4add-9e62-3d52dac5dc6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.364541 kubelet[3173]: E0912 17:44:29.364159 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ec97501-8553-450b-a6ff-0411fa4b5fba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.364541 kubelet[3173]: E0912 17:44:29.364199 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae71cbe7-247e-4add-9e62-3d52dac5dc6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" podUID="ae71cbe7-247e-4add-9e62-3d52dac5dc6c" Sep 12 17:44:29.364541 kubelet[3173]: E0912 17:44:29.364180 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ec97501-8553-450b-a6ff-0411fa4b5fba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xd7b8" podUID="9ec97501-8553-450b-a6ff-0411fa4b5fba" Sep 12 17:44:29.364837 kubelet[3173]: E0912 17:44:29.364413 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:29.364837 kubelet[3173]: E0912 17:44:29.364442 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8"} Sep 12 17:44:29.364837 kubelet[3173]: E0912 17:44:29.364489 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3d2640f-4a08-48ab-aceb-a50afe09f769\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.364837 kubelet[3173]: E0912 17:44:29.364507 3173 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"f3d2640f-4a08-48ab-aceb-a50afe09f769\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wm2jb" podUID="f3d2640f-4a08-48ab-aceb-a50afe09f769" Sep 12 17:44:29.364972 kubelet[3173]: E0912 17:44:29.364527 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:29.364972 kubelet[3173]: E0912 17:44:29.364549 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84"} Sep 12 17:44:29.364972 kubelet[3173]: E0912 17:44:29.364573 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92a98262-021b-493d-b45d-176a655b449a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.364972 kubelet[3173]: E0912 17:44:29.364596 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92a98262-021b-493d-b45d-176a655b449a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d78848566-lvdjs" podUID="92a98262-021b-493d-b45d-176a655b449a" Sep 12 17:44:29.365140 kubelet[3173]: E0912 17:44:29.364617 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:29.365140 kubelet[3173]: E0912 17:44:29.364630 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"} Sep 12 17:44:29.365140 kubelet[3173]: E0912 17:44:29.364646 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4dc518c-dc79-425e-b59d-e5f73f27e330\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.365140 kubelet[3173]: E0912 17:44:29.364660 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4dc518c-dc79-425e-b59d-e5f73f27e330\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" podUID="d4dc518c-dc79-425e-b59d-e5f73f27e330" Sep 12 17:44:29.365282 kubelet[3173]: E0912 17:44:29.364703 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:29.365282 kubelet[3173]: E0912 17:44:29.364738 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:29.365282 kubelet[3173]: E0912 17:44:29.364771 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf"} Sep 12 17:44:29.365282 kubelet[3173]: E0912 17:44:29.364795 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c89a2a04-b2de-4ae0-95af-29cde938d697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.365408 kubelet[3173]: E0912 17:44:29.364809 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c89a2a04-b2de-4ae0-95af-29cde938d697\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" podUID="c89a2a04-b2de-4ae0-95af-29cde938d697" Sep 12 17:44:29.365408 kubelet[3173]: E0912 17:44:29.364718 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"} Sep 12 17:44:29.365408 kubelet[3173]: E0912 17:44:29.364848 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dce169d5-8166-4cc4-9317-2154c59c4245\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:29.365408 kubelet[3173]: E0912 17:44:29.364863 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dce169d5-8166-4cc4-9317-2154c59c4245\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rww8b" podUID="dce169d5-8166-4cc4-9317-2154c59c4245" Sep 12 17:44:29.579317 systemd[1]: Created slice kubepods-besteffort-podee7b516d_8c24_4672_9d6b_95b402198b55.slice - libcontainer container kubepods-besteffort-podee7b516d_8c24_4672_9d6b_95b402198b55.slice. Sep 12 17:44:29.592449 containerd[1979]: time="2025-09-12T17:44:29.591656373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4wvz,Uid:ee7b516d-8c24-4672-9d6b-95b402198b55,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:29.765695 containerd[1979]: time="2025-09-12T17:44:29.765503265Z" level=error msg="Failed to destroy network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.771298 containerd[1979]: time="2025-09-12T17:44:29.768299551Z" level=error msg="encountered an error cleaning up failed sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.774326 containerd[1979]: time="2025-09-12T17:44:29.772152406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4wvz,Uid:ee7b516d-8c24-4672-9d6b-95b402198b55,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.778911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773-shm.mount: Deactivated successfully. 
Sep 12 17:44:29.783050 kubelet[3173]: E0912 17:44:29.782981 3173 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:29.783795 kubelet[3173]: E0912 17:44:29.783326 3173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b4wvz" Sep 12 17:44:29.783795 kubelet[3173]: E0912 17:44:29.783559 3173 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b4wvz" Sep 12 17:44:29.784413 kubelet[3173]: E0912 17:44:29.783764 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b4wvz_calico-system(ee7b516d-8c24-4672-9d6b-95b402198b55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b4wvz_calico-system(ee7b516d-8c24-4672-9d6b-95b402198b55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:30.224796 kubelet[3173]: I0912 17:44:30.224707 3173 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:30.232382 containerd[1979]: time="2025-09-12T17:44:30.229397372Z" level=info msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" Sep 12 17:44:30.232382 containerd[1979]: time="2025-09-12T17:44:30.229619481Z" level=info msg="Ensure that sandbox ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773 in task-service has been cleanup successfully" Sep 12 17:44:30.295788 containerd[1979]: time="2025-09-12T17:44:30.295665950Z" level=error msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" failed" error="failed to destroy network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:44:30.296264 kubelet[3173]: E0912 17:44:30.295963 3173 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:30.296264 kubelet[3173]: E0912 17:44:30.296047 3173 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773"} Sep 12 17:44:30.296264 kubelet[3173]: E0912 17:44:30.296103 3173 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee7b516d-8c24-4672-9d6b-95b402198b55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:44:30.296264 kubelet[3173]: E0912 17:44:30.296135 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee7b516d-8c24-4672-9d6b-95b402198b55\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b4wvz" podUID="ee7b516d-8c24-4672-9d6b-95b402198b55" Sep 12 17:44:34.806905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715341081.mount: Deactivated successfully. 
Sep 12 17:44:34.861828 containerd[1979]: time="2025-09-12T17:44:34.861702670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:34.869839 containerd[1979]: time="2025-09-12T17:44:34.869744585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:44:34.892094 containerd[1979]: time="2025-09-12T17:44:34.892013835Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:34.904463 containerd[1979]: time="2025-09-12T17:44:34.902789939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:34.904463 containerd[1979]: time="2025-09-12T17:44:34.903704994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.7358078s" Sep 12 17:44:34.904463 containerd[1979]: time="2025-09-12T17:44:34.904269949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:44:34.975796 containerd[1979]: time="2025-09-12T17:44:34.975731502Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:44:35.032917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764886966.mount: Deactivated successfully. Sep 12 17:44:35.050214 containerd[1979]: time="2025-09-12T17:44:35.050161376Z" level=info msg="CreateContainer within sandbox \"f399b3cec7f477a40472e6011a7032668d11005d4ca44e90ee6a29aa2fa97329\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc\"" Sep 12 17:44:35.050719 containerd[1979]: time="2025-09-12T17:44:35.050698477Z" level=info msg="StartContainer for \"1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc\"" Sep 12 17:44:35.229335 systemd[1]: Started cri-containerd-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc.scope - libcontainer container 1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc. Sep 12 17:44:35.306161 containerd[1979]: time="2025-09-12T17:44:35.306115097Z" level=info msg="StartContainer for \"1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc\" returns successfully" Sep 12 17:44:35.499049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:44:35.501632 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
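Annotation: the pull above moved ~157 MB (bytes read=157078339) in 6.74s; kubelet drove it through the CRI, but the same pull can be reproduced against the node's containerd directly. A sketch with the containerd Go client, assuming the standard socket path and the k8s.io namespace that CRI images land in:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same daemon socket and namespace the kubelet's CRI traffic uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The digest matches the repo digest in the ImageCreate events above.
	fmt.Println("pulled", img.Name(), img.Target().Digest)
}
```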
Sep 12 17:44:35.942040 containerd[1979]: time="2025-09-12T17:44:35.941122736Z" level=info msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" Sep 12 17:44:36.320988 kubelet[3173]: I0912 17:44:36.317709 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bcrbv" podStartSLOduration=2.940111925 podStartE2EDuration="19.296060969s" podCreationTimestamp="2025-09-12 17:44:17 +0000 UTC" firstStartedPulling="2025-09-12 17:44:18.549947881 +0000 UTC m=+25.112587078" lastFinishedPulling="2025-09-12 17:44:34.905896927 +0000 UTC m=+41.468536122" observedRunningTime="2025-09-12 17:44:36.294635356 +0000 UTC m=+42.857274559" watchObservedRunningTime="2025-09-12 17:44:36.296060969 +0000 UTC m=+42.858700215" Sep 12 17:44:36.378826 systemd[1]: run-containerd-runc-k8s.io-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc-runc.nJJoqA.mount: Deactivated successfully. Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.083 [INFO][4552] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.085 [INFO][4552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" iface="eth0" netns="/var/run/netns/cni-56677ec1-2fe0-0e37-2f6e-8c9f28f0e658" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.087 [INFO][4552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" iface="eth0" netns="/var/run/netns/cni-56677ec1-2fe0-0e37-2f6e-8c9f28f0e658" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.090 [INFO][4552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" iface="eth0" netns="/var/run/netns/cni-56677ec1-2fe0-0e37-2f6e-8c9f28f0e658" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.090 [INFO][4552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.090 [INFO][4552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.571 [INFO][4564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.579 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.581 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.604 [WARNING][4564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.605 [INFO][4564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.606 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:36.611694 containerd[1979]: 2025-09-12 17:44:36.609 [INFO][4552] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:36.615424 systemd[1]: run-netns-cni\x2d56677ec1\x2d2fe0\x2d0e37\x2d2f6e\x2d8c9f28f0e658.mount: Deactivated successfully. Sep 12 17:44:36.621930 containerd[1979]: time="2025-09-12T17:44:36.621870115Z" level=info msg="TearDown network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" successfully" Sep 12 17:44:36.621930 containerd[1979]: time="2025-09-12T17:44:36.621918332Z" level=info msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" returns successfully" Sep 12 17:44:36.672414 kubelet[3173]: I0912 17:44:36.672359 3173 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a98262-021b-493d-b45d-176a655b449a-whisker-ca-bundle\") pod \"92a98262-021b-493d-b45d-176a655b449a\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " Sep 12 17:44:36.672414 kubelet[3173]: I0912 17:44:36.672422 3173 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rw98\" (UniqueName: \"kubernetes.io/projected/92a98262-021b-493d-b45d-176a655b449a-kube-api-access-6rw98\") pod \"92a98262-021b-493d-b45d-176a655b449a\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " Sep 12 17:44:36.674551 kubelet[3173]: I0912 17:44:36.672451 3173 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92a98262-021b-493d-b45d-176a655b449a-whisker-backend-key-pair\") pod \"92a98262-021b-493d-b45d-176a655b449a\" (UID: \"92a98262-021b-493d-b45d-176a655b449a\") " Sep 12 17:44:36.703769 systemd[1]: var-lib-kubelet-pods-92a98262\x2d021b\x2d493d\x2db45d\x2d176a655b449a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rw98.mount: Deactivated successfully. Sep 12 17:44:36.704080 systemd[1]: var-lib-kubelet-pods-92a98262\x2d021b\x2d493d\x2db45d\x2d176a655b449a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:44:36.707784 kubelet[3173]: I0912 17:44:36.705624 3173 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a98262-021b-493d-b45d-176a655b449a-kube-api-access-6rw98" (OuterVolumeSpecName: "kube-api-access-6rw98") pod "92a98262-021b-493d-b45d-176a655b449a" (UID: "92a98262-021b-493d-b45d-176a655b449a"). InnerVolumeSpecName "kube-api-access-6rw98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:44:36.707964 kubelet[3173]: I0912 17:44:36.706153 3173 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92a98262-021b-493d-b45d-176a655b449a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "92a98262-021b-493d-b45d-176a655b449a" (UID: "92a98262-021b-493d-b45d-176a655b449a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:44:36.708109 kubelet[3173]: I0912 17:44:36.708008 3173 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a98262-021b-493d-b45d-176a655b449a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "92a98262-021b-493d-b45d-176a655b449a" (UID: "92a98262-021b-493d-b45d-176a655b449a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:44:36.773643 kubelet[3173]: I0912 17:44:36.773598 3173 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rw98\" (UniqueName: \"kubernetes.io/projected/92a98262-021b-493d-b45d-176a655b449a-kube-api-access-6rw98\") on node \"ip-172-31-28-238\" DevicePath \"\"" Sep 12 17:44:36.773643 kubelet[3173]: I0912 17:44:36.773634 3173 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92a98262-021b-493d-b45d-176a655b449a-whisker-backend-key-pair\") on node \"ip-172-31-28-238\" DevicePath \"\"" Sep 12 17:44:36.773643 kubelet[3173]: I0912 17:44:36.773652 3173 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a98262-021b-493d-b45d-176a655b449a-whisker-ca-bundle\") on node \"ip-172-31-28-238\" DevicePath \"\"" Sep 12 17:44:37.290065 systemd[1]: Removed slice kubepods-besteffort-pod92a98262_021b_493d_b45d_176a655b449a.slice - libcontainer container kubepods-besteffort-pod92a98262_021b_493d_b45d_176a655b449a.slice. Sep 12 17:44:37.324286 systemd[1]: run-containerd-runc-k8s.io-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc-runc.0u1mzQ.mount: Deactivated successfully. Sep 12 17:44:37.442342 systemd[1]: Created slice kubepods-besteffort-pod80087975_3dbe_4013_abb2_c170daacc281.slice - libcontainer container kubepods-besteffort-pod80087975_3dbe_4013_abb2_c170daacc281.slice. 
Sep 12 17:44:37.477369 kubelet[3173]: I0912 17:44:37.477325 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80087975-3dbe-4013-abb2-c170daacc281-whisker-backend-key-pair\") pod \"whisker-54b95dd557-g7qqj\" (UID: \"80087975-3dbe-4013-abb2-c170daacc281\") " pod="calico-system/whisker-54b95dd557-g7qqj" Sep 12 17:44:37.478794 kubelet[3173]: I0912 17:44:37.477388 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp9z\" (UniqueName: \"kubernetes.io/projected/80087975-3dbe-4013-abb2-c170daacc281-kube-api-access-kvp9z\") pod \"whisker-54b95dd557-g7qqj\" (UID: \"80087975-3dbe-4013-abb2-c170daacc281\") " pod="calico-system/whisker-54b95dd557-g7qqj" Sep 12 17:44:37.478794 kubelet[3173]: I0912 17:44:37.477437 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80087975-3dbe-4013-abb2-c170daacc281-whisker-ca-bundle\") pod \"whisker-54b95dd557-g7qqj\" (UID: \"80087975-3dbe-4013-abb2-c170daacc281\") " pod="calico-system/whisker-54b95dd557-g7qqj" Sep 12 17:44:37.570242 kubelet[3173]: I0912 17:44:37.569948 3173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a98262-021b-493d-b45d-176a655b449a" path="/var/lib/kubelet/pods/92a98262-021b-493d-b45d-176a655b449a/volumes" Sep 12 17:44:37.747051 containerd[1979]: time="2025-09-12T17:44:37.746979935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b95dd557-g7qqj,Uid:80087975-3dbe-4013-abb2-c170daacc281,Namespace:calico-system,Attempt:0,}" Sep 12 17:44:37.781113 kernel: bpftool[4727]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:44:38.025100 systemd-networkd[1820]: cali5024fb81ed9: Link UP Sep 12 17:44:38.025679 (udev-worker)[4525]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:44:38.029293 systemd-networkd[1820]: cali5024fb81ed9: Gained carrier Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.873 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0 whisker-54b95dd557- calico-system 80087975-3dbe-4013-abb2-c170daacc281 921 0 2025-09-12 17:44:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54b95dd557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-238 whisker-54b95dd557-g7qqj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5024fb81ed9 [] [] }} ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.873 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.920 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" HandleID="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Workload="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.920 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" HandleID="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Workload="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-238", "pod":"whisker-54b95dd557-g7qqj", "timestamp":"2025-09-12 17:44:37.920424481 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.921 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.921 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.921 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.934 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.954 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.961 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.964 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.967 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.967 [INFO][4752] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.969 [INFO][4752] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2 Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.975 [INFO][4752] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.986 [INFO][4752] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.193/26] block=192.168.96.192/26 handle="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.986 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.193/26] handle="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" host="ip-172-31-28-238" Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.986 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:38.074674 containerd[1979]: 2025-09-12 17:44:37.987 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.193/26] IPv6=[] ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" HandleID="k8s-pod-network.5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Workload="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:37.993 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0", GenerateName:"whisker-54b95dd557-", Namespace:"calico-system", SelfLink:"", UID:"80087975-3dbe-4013-abb2-c170daacc281", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b95dd557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"whisker-54b95dd557-g7qqj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5024fb81ed9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:37.994 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.193/32] ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:37.994 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5024fb81ed9 ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:38.029 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:38.031 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" 
WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0", GenerateName:"whisker-54b95dd557-", Namespace:"calico-system", SelfLink:"", UID:"80087975-3dbe-4013-abb2-c170daacc281", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b95dd557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2", Pod:"whisker-54b95dd557-g7qqj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5024fb81ed9", MAC:"36:63:f4:9c:53:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:38.075986 containerd[1979]: 2025-09-12 17:44:38.051 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2" Namespace="calico-system" Pod="whisker-54b95dd557-g7qqj" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--54b95dd557--g7qqj-eth0" Sep 12 17:44:38.162348 containerd[1979]: time="2025-09-12T17:44:38.160957144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:38.162348 containerd[1979]: time="2025-09-12T17:44:38.161055675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:38.162348 containerd[1979]: time="2025-09-12T17:44:38.161072470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:38.162348 containerd[1979]: time="2025-09-12T17:44:38.162146310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:38.186300 systemd[1]: Started cri-containerd-5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2.scope - libcontainer container 5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2. Sep 12 17:44:38.246217 systemd-networkd[1820]: vxlan.calico: Link UP Sep 12 17:44:38.246230 systemd-networkd[1820]: vxlan.calico: Gained carrier Sep 12 17:44:38.302773 (udev-worker)[4773]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:44:38.353305 containerd[1979]: time="2025-09-12T17:44:38.353263255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b95dd557-g7qqj,Uid:80087975-3dbe-4013-abb2-c170daacc281,Namespace:calico-system,Attempt:0,} returns sandbox id \"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2\"" Sep 12 17:44:38.361511 systemd[1]: run-containerd-runc-k8s.io-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc-runc.wgq5zc.mount: Deactivated successfully. Sep 12 17:44:38.374455 containerd[1979]: time="2025-09-12T17:44:38.372987863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:44:39.424733 systemd-networkd[1820]: cali5024fb81ed9: Gained IPv6LL Sep 12 17:44:39.564110 containerd[1979]: time="2025-09-12T17:44:39.563519432Z" level=info msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\"" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.664 [INFO][4921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.665 [INFO][4921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" iface="eth0" netns="/var/run/netns/cni-605028a6-20f8-5670-b103-a9d93f753794" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.665 [INFO][4921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" iface="eth0" netns="/var/run/netns/cni-605028a6-20f8-5670-b103-a9d93f753794" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.666 [INFO][4921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" iface="eth0" netns="/var/run/netns/cni-605028a6-20f8-5670-b103-a9d93f753794" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.666 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.666 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.724 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.724 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.724 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.735 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.735 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.737 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:39.742677 containerd[1979]: 2025-09-12 17:44:39.739 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:39.743343 containerd[1979]: time="2025-09-12T17:44:39.743112403Z" level=info msg="TearDown network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" successfully" Sep 12 17:44:39.743343 containerd[1979]: time="2025-09-12T17:44:39.743147977Z" level=info msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" returns successfully" Sep 12 17:44:39.747290 containerd[1979]: time="2025-09-12T17:44:39.747123916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rww8b,Uid:dce169d5-8166-4cc4-9317-2154c59c4245,Namespace:kube-system,Attempt:1,}" Sep 12 17:44:39.747946 systemd[1]: run-netns-cni\x2d605028a6\x2d20f8\x2d5670\x2db103\x2da9d93f753794.mount: Deactivated successfully. Sep 12 17:44:39.760266 containerd[1979]: time="2025-09-12T17:44:39.760206448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:39.785673 containerd[1979]: time="2025-09-12T17:44:39.785360085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:44:39.788316 containerd[1979]: time="2025-09-12T17:44:39.788259958Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:39.808509 systemd-networkd[1820]: vxlan.calico: Gained IPv6LL Sep 12 17:44:39.811490 containerd[1979]: time="2025-09-12T17:44:39.810007379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:39.820605 containerd[1979]: time="2025-09-12T17:44:39.820557851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.444832621s" Sep 12 17:44:39.820605 containerd[1979]: time="2025-09-12T17:44:39.820600948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:44:39.834497 containerd[1979]: 
time="2025-09-12T17:44:39.834453810Z" level=info msg="CreateContainer within sandbox \"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:44:39.869914 containerd[1979]: time="2025-09-12T17:44:39.869868696Z" level=info msg="CreateContainer within sandbox \"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d955411ea6a0428b810e937a36c96a21aa8dbbf64e4521368ccbb4ad5f14003a\"" Sep 12 17:44:39.872808 containerd[1979]: time="2025-09-12T17:44:39.871252438Z" level=info msg="StartContainer for \"d955411ea6a0428b810e937a36c96a21aa8dbbf64e4521368ccbb4ad5f14003a\"" Sep 12 17:44:39.964733 systemd[1]: Started cri-containerd-d955411ea6a0428b810e937a36c96a21aa8dbbf64e4521368ccbb4ad5f14003a.scope - libcontainer container d955411ea6a0428b810e937a36c96a21aa8dbbf64e4521368ccbb4ad5f14003a. Sep 12 17:44:40.015639 (udev-worker)[4852]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:44:40.017845 systemd-networkd[1820]: calic3393ae3394: Link UP Sep 12 17:44:40.018162 systemd-networkd[1820]: calic3393ae3394: Gained carrier Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.878 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0 coredns-7c65d6cfc9- kube-system dce169d5-8166-4cc4-9317-2154c59c4245 932 0 2025-09-12 17:44:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-238 coredns-7c65d6cfc9-rww8b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3393ae3394 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.878 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.926 [INFO][4953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" HandleID="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.929 [INFO][4953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" HandleID="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf900), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-238", "pod":"coredns-7c65d6cfc9-rww8b", "timestamp":"2025-09-12 17:44:39.925846868 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.930 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.930 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.930 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.946 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.961 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.976 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.982 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.987 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.987 [INFO][4953] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.990 [INFO][4953] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50 Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:39.998 [INFO][4953] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:40.009 [INFO][4953] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.194/26] block=192.168.96.192/26 handle="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:40.009 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.194/26] handle="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" host="ip-172-31-28-238" Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:40.009 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:40.070264 containerd[1979]: 2025-09-12 17:44:40.009 [INFO][4953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.194/26] IPv6=[] ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" HandleID="k8s-pod-network.6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.012 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dce169d5-8166-4cc4-9317-2154c59c4245", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"coredns-7c65d6cfc9-rww8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3393ae3394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.013 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.194/32] ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.013 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3393ae3394 ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.016 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" 
WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.017 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dce169d5-8166-4cc4-9317-2154c59c4245", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50", Pod:"coredns-7c65d6cfc9-rww8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3393ae3394", MAC:"06:f0:55:c9:dd:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:40.072378 containerd[1979]: 2025-09-12 17:44:40.059 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rww8b" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:40.086490 containerd[1979]: time="2025-09-12T17:44:40.086351244Z" level=info msg="StartContainer for \"d955411ea6a0428b810e937a36c96a21aa8dbbf64e4521368ccbb4ad5f14003a\" returns successfully" Sep 12 17:44:40.091613 containerd[1979]: time="2025-09-12T17:44:40.091357873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:44:40.131630 containerd[1979]: time="2025-09-12T17:44:40.126546360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:40.131630 containerd[1979]: time="2025-09-12T17:44:40.126630966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:40.131630 containerd[1979]: time="2025-09-12T17:44:40.126652782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:40.131630 containerd[1979]: time="2025-09-12T17:44:40.126771747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:40.181274 systemd[1]: Started cri-containerd-6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50.scope - libcontainer container 6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50. Sep 12 17:44:40.275268 containerd[1979]: time="2025-09-12T17:44:40.274276213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rww8b,Uid:dce169d5-8166-4cc4-9317-2154c59c4245,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50\"" Sep 12 17:44:40.283182 containerd[1979]: time="2025-09-12T17:44:40.282645196Z" level=info msg="CreateContainer within sandbox \"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:44:40.328934 containerd[1979]: time="2025-09-12T17:44:40.328893474Z" level=info msg="CreateContainer within sandbox \"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8987d58ce2f924ae6a58782a33e8db2f5546b1586d92c6a1651cc4d32b1c375a\"" Sep 12 17:44:40.331061 containerd[1979]: time="2025-09-12T17:44:40.330270533Z" level=info msg="StartContainer for \"8987d58ce2f924ae6a58782a33e8db2f5546b1586d92c6a1651cc4d32b1c375a\"" Sep 12 17:44:40.362433 systemd[1]: Started cri-containerd-8987d58ce2f924ae6a58782a33e8db2f5546b1586d92c6a1651cc4d32b1c375a.scope - libcontainer container 8987d58ce2f924ae6a58782a33e8db2f5546b1586d92c6a1651cc4d32b1c375a. Sep 12 17:44:40.407009 containerd[1979]: time="2025-09-12T17:44:40.406963260Z" level=info msg="StartContainer for \"8987d58ce2f924ae6a58782a33e8db2f5546b1586d92c6a1651cc4d32b1c375a\" returns successfully" Sep 12 17:44:41.411224 kubelet[3173]: I0912 17:44:41.411075 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rww8b" podStartSLOduration=41.411052252 podStartE2EDuration="41.411052252s" podCreationTimestamp="2025-09-12 17:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:44:41.385919168 +0000 UTC m=+47.948558375" watchObservedRunningTime="2025-09-12 17:44:41.411052252 +0000 UTC m=+47.973691459" Sep 12 17:44:41.473188 systemd-networkd[1820]: calic3393ae3394: Gained IPv6LL Sep 12 17:44:41.565385 containerd[1979]: time="2025-09-12T17:44:41.565122336Z" level=info msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" Sep 12 17:44:41.758465 systemd[1]: Started sshd@7-172.31.28.238:22-147.75.109.163:56490.service - OpenSSH per-connection server daemon (147.75.109.163:56490). Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.768 [INFO][5096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.768 [INFO][5096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" iface="eth0" netns="/var/run/netns/cni-1e76b147-e501-8d37-7674-e44364935dde" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.770 [INFO][5096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" iface="eth0" netns="/var/run/netns/cni-1e76b147-e501-8d37-7674-e44364935dde" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.771 [INFO][5096] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" iface="eth0" netns="/var/run/netns/cni-1e76b147-e501-8d37-7674-e44364935dde" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.771 [INFO][5096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.771 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.858 [INFO][5109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.858 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.858 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.870 [WARNING][5109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.870 [INFO][5109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.873 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:41.882409 containerd[1979]: 2025-09-12 17:44:41.878 [INFO][5096] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:41.886045 containerd[1979]: time="2025-09-12T17:44:41.884056518Z" level=info msg="TearDown network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" successfully" Sep 12 17:44:41.886045 containerd[1979]: time="2025-09-12T17:44:41.884093773Z" level=info msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" returns successfully" Sep 12 17:44:41.886343 containerd[1979]: time="2025-09-12T17:44:41.886303771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4wvz,Uid:ee7b516d-8c24-4672-9d6b-95b402198b55,Namespace:calico-system,Attempt:1,}" Sep 12 17:44:41.891558 systemd[1]: run-netns-cni\x2d1e76b147\x2de501\x2d8d37\x2d7674\x2de44364935dde.mount: Deactivated successfully. Sep 12 17:44:42.027533 sshd[5106]: Accepted publickey for core from 147.75.109.163 port 56490 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:44:42.032906 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:42.044405 systemd-logind[1965]: New session 8 of user core. Sep 12 17:44:42.049492 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:44:42.147809 systemd-networkd[1820]: calie1f997fe864: Link UP Sep 12 17:44:42.150168 systemd-networkd[1820]: calie1f997fe864: Gained carrier Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:41.996 [INFO][5118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0 csi-node-driver- calico-system ee7b516d-8c24-4672-9d6b-95b402198b55 984 0 2025-09-12 17:44:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-238 csi-node-driver-b4wvz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie1f997fe864 [] [] }} ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:41.996 [INFO][5118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.076 [INFO][5130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" HandleID="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.077 [INFO][5130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" HandleID="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00024f2e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-238", "pod":"csi-node-driver-b4wvz", "timestamp":"2025-09-12 17:44:42.076549357 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.077 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.077 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.077 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.087 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.094 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.102 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.106 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.110 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.110 [INFO][5130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.113 [INFO][5130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.120 [INFO][5130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.135 [INFO][5130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.195/26] block=192.168.96.192/26 handle="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.135 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.195/26] handle="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" host="ip-172-31-28-238" Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.135 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:42.201823 containerd[1979]: 2025-09-12 17:44:42.135 [INFO][5130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.195/26] IPv6=[] ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" HandleID="k8s-pod-network.cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.141 [INFO][5118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee7b516d-8c24-4672-9d6b-95b402198b55", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"csi-node-driver-b4wvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f997fe864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.142 [INFO][5118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.195/32] ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.142 [INFO][5118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1f997fe864 ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.151 [INFO][5118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.159 [INFO][5118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" 
Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee7b516d-8c24-4672-9d6b-95b402198b55", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b", Pod:"csi-node-driver-b4wvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f997fe864", MAC:"1e:3f:15:68:fe:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:42.203385 containerd[1979]: 2025-09-12 17:44:42.194 [INFO][5118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b" Namespace="calico-system" Pod="csi-node-driver-b4wvz" WorkloadEndpoint="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:42.263172 containerd[1979]: time="2025-09-12T17:44:42.262745635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:42.263172 containerd[1979]: time="2025-09-12T17:44:42.263046356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:42.263172 containerd[1979]: time="2025-09-12T17:44:42.263161688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:42.263415 containerd[1979]: time="2025-09-12T17:44:42.263292686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:42.342271 systemd[1]: Started cri-containerd-cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b.scope - libcontainer container cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b. 
Sep 12 17:44:42.485874 containerd[1979]: time="2025-09-12T17:44:42.485825429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b4wvz,Uid:ee7b516d-8c24-4672-9d6b-95b402198b55,Namespace:calico-system,Attempt:1,} returns sandbox id \"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b\"" Sep 12 17:44:42.564738 containerd[1979]: time="2025-09-12T17:44:42.562526639Z" level=info msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" Sep 12 17:44:42.564738 containerd[1979]: time="2025-09-12T17:44:42.564138676Z" level=info msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" Sep 12 17:44:42.572169 containerd[1979]: time="2025-09-12T17:44:42.572037061Z" level=info msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.758 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.758 [INFO][5222] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" iface="eth0" netns="/var/run/netns/cni-accafc51-517c-3617-7d69-f5838e9b8b45" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.760 [INFO][5222] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" iface="eth0" netns="/var/run/netns/cni-accafc51-517c-3617-7d69-f5838e9b8b45" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.762 [INFO][5222] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" iface="eth0" netns="/var/run/netns/cni-accafc51-517c-3617-7d69-f5838e9b8b45" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.762 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.762 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.948 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.948 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.948 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.967 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.967 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.969 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:42.994603 containerd[1979]: 2025-09-12 17:44:42.985 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.861 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.863 [INFO][5223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" iface="eth0" netns="/var/run/netns/cni-72df6a5c-6c72-2de2-2c7a-4b10064eb520" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.864 [INFO][5223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" iface="eth0" netns="/var/run/netns/cni-72df6a5c-6c72-2de2-2c7a-4b10064eb520" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.866 [INFO][5223] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" iface="eth0" netns="/var/run/netns/cni-72df6a5c-6c72-2de2-2c7a-4b10064eb520" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.868 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.868 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.964 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.966 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.969 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.982 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.982 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.990 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:42.997038 containerd[1979]: 2025-09-12 17:44:42.993 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:43.000932 containerd[1979]: time="2025-09-12T17:44:43.000440680Z" level=info msg="TearDown network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" successfully" Sep 12 17:44:43.000932 containerd[1979]: time="2025-09-12T17:44:43.000485488Z" level=info msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" returns successfully" Sep 12 17:44:43.002896 containerd[1979]: time="2025-09-12T17:44:43.002560505Z" level=info msg="TearDown network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" successfully" Sep 12 17:44:43.002896 containerd[1979]: time="2025-09-12T17:44:43.002600545Z" level=info msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" returns successfully" Sep 12 17:44:43.004406 systemd[1]: run-netns-cni\x2d72df6a5c\x2d6c72\x2d2de2\x2d2c7a\x2d4b10064eb520.mount: Deactivated successfully. Sep 12 17:44:43.004554 systemd[1]: run-netns-cni\x2daccafc51\x2d517c\x2d3617\x2d7d69\x2df5838e9b8b45.mount: Deactivated successfully. Sep 12 17:44:43.017613 containerd[1979]: time="2025-09-12T17:44:43.016856716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd7b8,Uid:9ec97501-8553-450b-a6ff-0411fa4b5fba,Namespace:kube-system,Attempt:1,}" Sep 12 17:44:43.022562 containerd[1979]: time="2025-09-12T17:44:43.021755105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-xx7p7,Uid:c89a2a04-b2de-4ae0-95af-29cde938d697,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.853 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.853 [INFO][5224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" iface="eth0" netns="/var/run/netns/cni-6b139582-bd20-f6a7-d850-d6d83976fe52" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.854 [INFO][5224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" iface="eth0" netns="/var/run/netns/cni-6b139582-bd20-f6a7-d850-d6d83976fe52" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.854 [INFO][5224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" iface="eth0" netns="/var/run/netns/cni-6b139582-bd20-f6a7-d850-d6d83976fe52" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.854 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:42.854 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.023 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.024 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.024 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.039 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.039 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.044 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:43.070335 containerd[1979]: 2025-09-12 17:44:43.049 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:43.070335 containerd[1979]: time="2025-09-12T17:44:43.069732066Z" level=info msg="TearDown network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" successfully" Sep 12 17:44:43.070335 containerd[1979]: time="2025-09-12T17:44:43.069763756Z" level=info msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" returns successfully" Sep 12 17:44:43.073811 containerd[1979]: time="2025-09-12T17:44:43.071839941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wm2jb,Uid:f3d2640f-4a08-48ab-aceb-a50afe09f769,Namespace:calico-system,Attempt:1,}" Sep 12 17:44:43.078693 systemd[1]: run-netns-cni\x2d6b139582\x2dbd20\x2df6a7\x2dd850\x2dd6d83976fe52.mount: Deactivated successfully. Sep 12 17:44:43.389271 sshd[5106]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:43.400185 systemd-networkd[1820]: calie1f997fe864: Gained IPv6LL Sep 12 17:44:43.408563 systemd[1]: sshd@7-172.31.28.238:22-147.75.109.163:56490.service: Deactivated successfully. Sep 12 17:44:43.417693 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 12 17:44:43.426927 systemd-logind[1965]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:44:43.434188 systemd-logind[1965]: Removed session 8. Sep 12 17:44:43.537383 systemd-networkd[1820]: cali4add7208d21: Link UP Sep 12 17:44:43.539639 systemd-networkd[1820]: cali4add7208d21: Gained carrier Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.252 [INFO][5263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0 coredns-7c65d6cfc9- kube-system 9ec97501-8553-450b-a6ff-0411fa4b5fba 997 0 2025-09-12 17:44:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-238 coredns-7c65d6cfc9-xd7b8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4add7208d21 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.252 [INFO][5263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.415 [INFO][5297] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" HandleID="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.416 [INFO][5297] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" HandleID="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123830), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-238", "pod":"coredns-7c65d6cfc9-xd7b8", "timestamp":"2025-09-12 17:44:43.415756361 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.416 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.416 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.416 [INFO][5297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.438 [INFO][5297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.448 [INFO][5297] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.461 [INFO][5297] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.468 [INFO][5297] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.477 [INFO][5297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.478 [INFO][5297] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.485 [INFO][5297] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343 Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.496 [INFO][5297] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.515 [INFO][5297] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.196/26] block=192.168.96.192/26 handle="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.516 [INFO][5297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.196/26] handle="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" host="ip-172-31-28-238" Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.516 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:43.596542 containerd[1979]: 2025-09-12 17:44:43.516 [INFO][5297] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.196/26] IPv6=[] ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" HandleID="k8s-pod-network.4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.523 [INFO][5263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ec97501-8553-450b-a6ff-0411fa4b5fba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"coredns-7c65d6cfc9-xd7b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4add7208d21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.524 [INFO][5263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.196/32] ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.524 [INFO][5263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4add7208d21 ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.551 [INFO][5263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" 
WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.555 [INFO][5263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ec97501-8553-450b-a6ff-0411fa4b5fba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343", Pod:"coredns-7c65d6cfc9-xd7b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4add7208d21", MAC:"6e:f2:cf:af:47:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.600324 containerd[1979]: 2025-09-12 17:44:43.590 [INFO][5263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xd7b8" WorkloadEndpoint="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:43.685632 systemd-networkd[1820]: calif502e55ab9a: Link UP Sep 12 17:44:43.688467 systemd-networkd[1820]: calif502e55ab9a: Gained carrier Sep 12 17:44:43.724934 containerd[1979]: time="2025-09-12T17:44:43.724376464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:43.724934 containerd[1979]: time="2025-09-12T17:44:43.724455161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:43.724934 containerd[1979]: time="2025-09-12T17:44:43.724480601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:43.724934 containerd[1979]: time="2025-09-12T17:44:43.724616349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.422 [INFO][5275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0 calico-apiserver-59f8688b7d- calico-apiserver c89a2a04-b2de-4ae0-95af-29cde938d697 999 0 2025-09-12 17:44:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f8688b7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-238 calico-apiserver-59f8688b7d-xx7p7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif502e55ab9a [] [] }} ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.422 [INFO][5275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.544 [INFO][5317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" HandleID="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.550 [INFO][5317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" HandleID="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-238", "pod":"calico-apiserver-59f8688b7d-xx7p7", "timestamp":"2025-09-12 17:44:43.535597121 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.550 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.550 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.550 [INFO][5317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.567 [INFO][5317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.578 [INFO][5317] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.596 [INFO][5317] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.606 [INFO][5317] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.610 [INFO][5317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.611 [INFO][5317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.615 [INFO][5317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56 Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.623 [INFO][5317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.638 [INFO][5317] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.197/26] block=192.168.96.192/26 handle="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.639 [INFO][5317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.197/26] handle="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" host="ip-172-31-28-238" Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.639 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:43.771217 containerd[1979]: 2025-09-12 17:44:43.639 [INFO][5317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.197/26] IPv6=[] ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" HandleID="k8s-pod-network.9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.660 [INFO][5275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c89a2a04-b2de-4ae0-95af-29cde938d697", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"calico-apiserver-59f8688b7d-xx7p7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif502e55ab9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.660 [INFO][5275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.197/32] ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.660 [INFO][5275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif502e55ab9a ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.691 [INFO][5275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.693 [INFO][5275] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c89a2a04-b2de-4ae0-95af-29cde938d697", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56", Pod:"calico-apiserver-59f8688b7d-xx7p7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif502e55ab9a", MAC:"8a:94:3d:2e:71:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.774257 containerd[1979]: 2025-09-12 17:44:43.745 [INFO][5275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-xx7p7" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:43.850282 systemd-networkd[1820]: cali1e001a9c0b5: Link UP Sep 12 17:44:43.857267 systemd-networkd[1820]: cali1e001a9c0b5: Gained carrier Sep 12 17:44:43.885853 systemd[1]: Started cri-containerd-4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343.scope - libcontainer container 4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343. Sep 12 17:44:43.890696 containerd[1979]: time="2025-09-12T17:44:43.885163749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:43.890696 containerd[1979]: time="2025-09-12T17:44:43.886086843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:43.890696 containerd[1979]: time="2025-09-12T17:44:43.886178287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:43.890696 containerd[1979]: time="2025-09-12T17:44:43.887123617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:43.951718 systemd[1]: Started cri-containerd-9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56.scope - libcontainer container 9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56. Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.363 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0 goldmane-7988f88666- calico-system f3d2640f-4a08-48ab-aceb-a50afe09f769 998 0 2025-09-12 17:44:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-238 goldmane-7988f88666-wm2jb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1e001a9c0b5 [] [] }} ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.367 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.563 [INFO][5310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" HandleID="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.563 [INFO][5310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" HandleID="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312660), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-238", "pod":"goldmane-7988f88666-wm2jb", "timestamp":"2025-09-12 17:44:43.563383072 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.563 [INFO][5310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.645 [INFO][5310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.645 [INFO][5310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.689 [INFO][5310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.708 [INFO][5310] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.726 [INFO][5310] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.735 [INFO][5310] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.749 [INFO][5310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.749 [INFO][5310] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.758 [INFO][5310] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707 Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.777 [INFO][5310] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.804 [INFO][5310] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.198/26] block=192.168.96.192/26 handle="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.804 [INFO][5310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.198/26] handle="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" host="ip-172-31-28-238" Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.804 [INFO][5310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:43.982429 containerd[1979]: 2025-09-12 17:44:43.804 [INFO][5310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.198/26] IPv6=[] ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" HandleID="k8s-pod-network.28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.832 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f3d2640f-4a08-48ab-aceb-a50afe09f769", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"goldmane-7988f88666-wm2jb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e001a9c0b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.833 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.198/32] ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.833 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e001a9c0b5 ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.887 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.892 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" 
WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f3d2640f-4a08-48ab-aceb-a50afe09f769", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707", Pod:"goldmane-7988f88666-wm2jb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e001a9c0b5", MAC:"62:4c:c8:4e:15:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:43.983451 containerd[1979]: 2025-09-12 17:44:43.953 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707" Namespace="calico-system" Pod="goldmane-7988f88666-wm2jb" WorkloadEndpoint="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:44.073181 containerd[1979]: time="2025-09-12T17:44:44.073132584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xd7b8,Uid:9ec97501-8553-450b-a6ff-0411fa4b5fba,Namespace:kube-system,Attempt:1,} returns sandbox id \"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343\"" Sep 12 17:44:44.094316 containerd[1979]: time="2025-09-12T17:44:44.093847480Z" level=info msg="CreateContainer within sandbox \"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:44:44.104879 containerd[1979]: time="2025-09-12T17:44:44.101694836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:44.104879 containerd[1979]: time="2025-09-12T17:44:44.101782889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:44.104879 containerd[1979]: time="2025-09-12T17:44:44.101806575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:44.104879 containerd[1979]: time="2025-09-12T17:44:44.101922795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:44.191817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807622605.mount: Deactivated successfully. 
Sep 12 17:44:44.203842 containerd[1979]: time="2025-09-12T17:44:44.203624459Z" level=info msg="CreateContainer within sandbox \"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d3c8ad9dd8a39af70631f4d6a562c8c3c207bb82eba524669722315c1d40a4e\"" Sep 12 17:44:44.210624 containerd[1979]: time="2025-09-12T17:44:44.209099631Z" level=info msg="StartContainer for \"0d3c8ad9dd8a39af70631f4d6a562c8c3c207bb82eba524669722315c1d40a4e\"" Sep 12 17:44:44.225521 systemd[1]: Started cri-containerd-28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707.scope - libcontainer container 28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707. Sep 12 17:44:44.290377 systemd[1]: Started cri-containerd-0d3c8ad9dd8a39af70631f4d6a562c8c3c207bb82eba524669722315c1d40a4e.scope - libcontainer container 0d3c8ad9dd8a39af70631f4d6a562c8c3c207bb82eba524669722315c1d40a4e. Sep 12 17:44:44.370754 containerd[1979]: time="2025-09-12T17:44:44.368995611Z" level=info msg="StartContainer for \"0d3c8ad9dd8a39af70631f4d6a562c8c3c207bb82eba524669722315c1d40a4e\" returns successfully" Sep 12 17:44:44.452343 containerd[1979]: time="2025-09-12T17:44:44.452246344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-xx7p7,Uid:c89a2a04-b2de-4ae0-95af-29cde938d697,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56\"" Sep 12 17:44:44.503738 containerd[1979]: time="2025-09-12T17:44:44.502702987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wm2jb,Uid:f3d2640f-4a08-48ab-aceb-a50afe09f769,Namespace:calico-system,Attempt:1,} returns sandbox id \"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707\"" Sep 12 17:44:44.507208 kubelet[3173]: I0912 17:44:44.506970 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xd7b8" podStartSLOduration=44.506942374 podStartE2EDuration="44.506942374s" podCreationTimestamp="2025-09-12 17:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:44:44.496588767 +0000 UTC m=+51.059227975" watchObservedRunningTime="2025-09-12 17:44:44.506942374 +0000 UTC m=+51.069581580" Sep 12 17:44:44.565412 containerd[1979]: time="2025-09-12T17:44:44.564541496Z" level=info msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" Sep 12 17:44:44.573706 containerd[1979]: time="2025-09-12T17:44:44.573493417Z" level=info msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\"" Sep 12 17:44:44.588271 containerd[1979]: time="2025-09-12T17:44:44.588205753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:44.593318 containerd[1979]: time="2025-09-12T17:44:44.592251536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:44:44.602162 containerd[1979]: time="2025-09-12T17:44:44.600000034Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:44.616081 containerd[1979]: time="2025-09-12T17:44:44.615554819Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:44.623853 containerd[1979]: time="2025-09-12T17:44:44.623697857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.532285738s" Sep 12 17:44:44.624406 containerd[1979]: time="2025-09-12T17:44:44.624077294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:44:44.627504 containerd[1979]: time="2025-09-12T17:44:44.627277419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:44:44.628624 containerd[1979]: time="2025-09-12T17:44:44.628434413Z" level=info msg="CreateContainer within sandbox \"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:44:44.668887 containerd[1979]: time="2025-09-12T17:44:44.668836937Z" level=info msg="CreateContainer within sandbox \"5753583c2be3fa03f38fc1fe3870323b18ac644b1957cd77ad462e2ea61ef9d2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"240789ccf9369eed272e32bcf76789b938eba5295091ddbe8f13cf23f468c1a3\"" Sep 12 17:44:44.673661 containerd[1979]: time="2025-09-12T17:44:44.671898586Z" level=info msg="StartContainer for \"240789ccf9369eed272e32bcf76789b938eba5295091ddbe8f13cf23f468c1a3\"" Sep 12 17:44:44.776458 systemd[1]: Started cri-containerd-240789ccf9369eed272e32bcf76789b938eba5295091ddbe8f13cf23f468c1a3.scope - libcontainer container 240789ccf9369eed272e32bcf76789b938eba5295091ddbe8f13cf23f468c1a3. Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.785 [INFO][5552] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.785 [INFO][5552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" iface="eth0" netns="/var/run/netns/cni-739ca9ef-95c5-f2ed-6c74-be7599b496c1" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.786 [INFO][5552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" iface="eth0" netns="/var/run/netns/cni-739ca9ef-95c5-f2ed-6c74-be7599b496c1" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.789 [INFO][5552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" iface="eth0" netns="/var/run/netns/cni-739ca9ef-95c5-f2ed-6c74-be7599b496c1" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.789 [INFO][5552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.789 [INFO][5552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.855 [INFO][5583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.855 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.855 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.867 [WARNING][5583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.868 [INFO][5583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.874 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:44.892255 containerd[1979]: 2025-09-12 17:44:44.883 [INFO][5552] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Sep 12 17:44:44.894720 containerd[1979]: time="2025-09-12T17:44:44.893981308Z" level=info msg="TearDown network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" successfully" Sep 12 17:44:44.894720 containerd[1979]: time="2025-09-12T17:44:44.894073536Z" level=info msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" returns successfully" Sep 12 17:44:44.897045 containerd[1979]: time="2025-09-12T17:44:44.895271629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-vc7r8,Uid:d4dc518c-dc79-425e-b59d-e5f73f27e330,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.775 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.776 [INFO][5542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" iface="eth0" netns="/var/run/netns/cni-c481282b-33a0-f883-9984-c00075486f74" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.782 [INFO][5542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" iface="eth0" netns="/var/run/netns/cni-c481282b-33a0-f883-9984-c00075486f74" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.784 [INFO][5542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" iface="eth0" netns="/var/run/netns/cni-c481282b-33a0-f883-9984-c00075486f74" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.784 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.786 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.881 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.884 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.884 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.904 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.904 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.906 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:44.917448 containerd[1979]: 2025-09-12 17:44:44.911 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:44.918423 containerd[1979]: time="2025-09-12T17:44:44.918380145Z" level=info msg="TearDown network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" successfully" Sep 12 17:44:44.918423 containerd[1979]: time="2025-09-12T17:44:44.918417382Z" level=info msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" returns successfully" Sep 12 17:44:44.919730 containerd[1979]: time="2025-09-12T17:44:44.919284982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88b68d46-gcsq2,Uid:ae71cbe7-247e-4add-9e62-3d52dac5dc6c,Namespace:calico-system,Attempt:1,}" Sep 12 17:44:44.928648 systemd-networkd[1820]: cali4add7208d21: Gained IPv6LL Sep 12 17:44:44.929068 systemd-networkd[1820]: calif502e55ab9a: Gained IPv6LL Sep 12 17:44:44.955613 containerd[1979]: time="2025-09-12T17:44:44.955567721Z" level=info msg="StartContainer for \"240789ccf9369eed272e32bcf76789b938eba5295091ddbe8f13cf23f468c1a3\" returns successfully" Sep 12 17:44:45.025804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090494128.mount: Deactivated successfully. Sep 12 17:44:45.025960 systemd[1]: run-netns-cni\x2dc481282b\x2d33a0\x2df883\x2d9984\x2dc00075486f74.mount: Deactivated successfully. Sep 12 17:44:45.030123 systemd[1]: run-netns-cni\x2d739ca9ef\x2d95c5\x2df2ed\x2d6c74\x2dbe7599b496c1.mount: Deactivated successfully. Sep 12 17:44:45.306884 systemd-networkd[1820]: cali5ccee6f9107: Link UP Sep 12 17:44:45.309160 systemd-networkd[1820]: cali5ccee6f9107: Gained carrier Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.131 [INFO][5617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0 calico-kube-controllers-88b68d46- calico-system ae71cbe7-247e-4add-9e62-3d52dac5dc6c 1034 0 2025-09-12 17:44:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:88b68d46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-238 calico-kube-controllers-88b68d46-gcsq2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ccee6f9107 [] [] }} ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.131 [INFO][5617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.225 [INFO][5638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" HandleID="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 
17:44:45.225 [INFO][5638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" HandleID="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-238", "pod":"calico-kube-controllers-88b68d46-gcsq2", "timestamp":"2025-09-12 17:44:45.225126598 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.226 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.226 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.226 [INFO][5638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.244 [INFO][5638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.256 [INFO][5638] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.264 [INFO][5638] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.267 [INFO][5638] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.271 [INFO][5638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.271 [INFO][5638] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.274 [INFO][5638] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.283 [INFO][5638] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.295 [INFO][5638] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.199/26] block=192.168.96.192/26 handle="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.295 [INFO][5638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.199/26] handle="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" host="ip-172-31-28-238" Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.295 [INFO][5638] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:45.341126 containerd[1979]: 2025-09-12 17:44:45.295 [INFO][5638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.199/26] IPv6=[] ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" HandleID="k8s-pod-network.1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.301 [INFO][5617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0", GenerateName:"calico-kube-controllers-88b68d46-", Namespace:"calico-system", SelfLink:"", UID:"ae71cbe7-247e-4add-9e62-3d52dac5dc6c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88b68d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"calico-kube-controllers-88b68d46-gcsq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccee6f9107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.301 [INFO][5617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.199/32] ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.301 [INFO][5617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ccee6f9107 ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.312 [INFO][5617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12
17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.315 [INFO][5617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0", GenerateName:"calico-kube-controllers-88b68d46-", Namespace:"calico-system", SelfLink:"", UID:"ae71cbe7-247e-4add-9e62-3d52dac5dc6c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88b68d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f", Pod:"calico-kube-controllers-88b68d46-gcsq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccee6f9107", MAC:"3e:12:61:80:42:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:45.342752 containerd[1979]: 2025-09-12 17:44:45.335 [INFO][5617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f" Namespace="calico-system" Pod="calico-kube-controllers-88b68d46-gcsq2" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:45.396135 containerd[1979]: time="2025-09-12T17:44:45.395060194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:45.396502 containerd[1979]: time="2025-09-12T17:44:45.396380675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:45.397080 containerd[1979]: time="2025-09-12T17:44:45.396605413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:45.398433 containerd[1979]: time="2025-09-12T17:44:45.398297497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:45.419921 systemd-networkd[1820]: cali3d87c54580f: Link UP Sep 12 17:44:45.427678 systemd-networkd[1820]: cali3d87c54580f: Gained carrier Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.152 [INFO][5601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0 calico-apiserver-59f8688b7d- calico-apiserver d4dc518c-dc79-425e-b59d-e5f73f27e330 1035 0 2025-09-12 17:44:13 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f8688b7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-238 calico-apiserver-59f8688b7d-vc7r8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3d87c54580f [] [] }} ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.152 [INFO][5601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.245 [INFO][5643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" HandleID="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.246 [INFO][5643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" HandleID="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-238", "pod":"calico-apiserver-59f8688b7d-vc7r8", "timestamp":"2025-09-12 17:44:45.245775518 +0000 UTC"}, Hostname:"ip-172-31-28-238", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.247 [INFO][5643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.296 [INFO][5643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.296 [INFO][5643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-238' Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.347 [INFO][5643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.358 [INFO][5643] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.366 [INFO][5643] ipam/ipam.go 511: Trying affinity for 192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.369 [INFO][5643] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.373 [INFO][5643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.373 [INFO][5643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.377 [INFO][5643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.388 [INFO][5643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.410 [INFO][5643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.200/26] block=192.168.96.192/26 handle="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.411 [INFO][5643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.200/26] handle="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" host="ip-172-31-28-238" Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.411 [INFO][5643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:44:45.478139 containerd[1979]: 2025-09-12 17:44:45.411 [INFO][5643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.200/26] IPv6=[] ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" HandleID="k8s-pod-network.352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.415 [INFO][5601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4dc518c-dc79-425e-b59d-e5f73f27e330", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"", Pod:"calico-apiserver-59f8688b7d-vc7r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d87c54580f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.415 [INFO][5601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.200/32] ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.415 [INFO][5601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d87c54580f ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.432 [INFO][5601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.436 [INFO][5601] cni-plugin/k8s.go 446: Added Mac,
interface name, and active container ID to endpoint ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4dc518c-dc79-425e-b59d-e5f73f27e330", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d", Pod:"calico-apiserver-59f8688b7d-vc7r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d87c54580f", MAC:"5a:3e:c9:b4:de:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:45.480265 containerd[1979]: 2025-09-12 17:44:45.463 [INFO][5601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d" Namespace="calico-apiserver" Pod="calico-apiserver-59f8688b7d-vc7r8" WorkloadEndpoint="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0" Sep 12 17:44:45.480798 systemd[1]: Started cri-containerd-1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f.scope - libcontainer container 1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f. Sep 12 17:44:45.506100 kubelet[3173]: I0912 17:44:45.505915 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54b95dd557-g7qqj" podStartSLOduration=2.25125676 podStartE2EDuration="8.505891412s" podCreationTimestamp="2025-09-12 17:44:37 +0000 UTC" firstStartedPulling="2025-09-12 17:44:38.370660711 +0000 UTC m=+44.933299898" lastFinishedPulling="2025-09-12 17:44:44.62529535 +0000 UTC m=+51.187934550" observedRunningTime="2025-09-12 17:44:45.50561652 +0000 UTC m=+52.068255726" watchObservedRunningTime="2025-09-12 17:44:45.505891412 +0000 UTC m=+52.068530618" Sep 12 17:44:45.595469 containerd[1979]: time="2025-09-12T17:44:45.595114923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:44:45.595878 containerd[1979]: time="2025-09-12T17:44:45.595370180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:44:45.595878 containerd[1979]: time="2025-09-12T17:44:45.595692337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:45.596600 containerd[1979]: time="2025-09-12T17:44:45.596357694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:44:45.633354 systemd-networkd[1820]: cali1e001a9c0b5: Gained IPv6LL Sep 12 17:44:45.651009 systemd[1]: Started cri-containerd-352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d.scope - libcontainer container 352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d. Sep 12 17:44:45.724186 containerd[1979]: time="2025-09-12T17:44:45.724128713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-88b68d46-gcsq2,Uid:ae71cbe7-247e-4add-9e62-3d52dac5dc6c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f\"" Sep 12 17:44:45.751809 containerd[1979]: time="2025-09-12T17:44:45.751759090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f8688b7d-vc7r8,Uid:d4dc518c-dc79-425e-b59d-e5f73f27e330,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d\"" Sep 12 17:44:46.510354 containerd[1979]: time="2025-09-12T17:44:46.510308012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:46.513066 containerd[1979]: time="2025-09-12T17:44:46.512988092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:44:46.517045 containerd[1979]: time="2025-09-12T17:44:46.515174605Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:46.520730 containerd[1979]: time="2025-09-12T17:44:46.520685809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:46.522202 containerd[1979]: time="2025-09-12T17:44:46.522165069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.894841318s" Sep 12 17:44:46.524042 containerd[1979]: time="2025-09-12T17:44:46.523989508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:44:46.526332 containerd[1979]: time="2025-09-12T17:44:46.526304139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:44:46.528865 containerd[1979]: time="2025-09-12T17:44:46.528823279Z" level=info msg="CreateContainer within sandbox \"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:44:46.574229 containerd[1979]: 
time="2025-09-12T17:44:46.574184754Z" level=info msg="CreateContainer within sandbox \"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"38d19afc5dd81a9dece4d58d90cad4ee13c843dfe00ce39f9c74d497df0ab217\"" Sep 12 17:44:46.575609 containerd[1979]: time="2025-09-12T17:44:46.575265241Z" level=info msg="StartContainer for \"38d19afc5dd81a9dece4d58d90cad4ee13c843dfe00ce39f9c74d497df0ab217\"" Sep 12 17:44:46.642565 systemd[1]: Started cri-containerd-38d19afc5dd81a9dece4d58d90cad4ee13c843dfe00ce39f9c74d497df0ab217.scope - libcontainer container 38d19afc5dd81a9dece4d58d90cad4ee13c843dfe00ce39f9c74d497df0ab217. Sep 12 17:44:46.713717 containerd[1979]: time="2025-09-12T17:44:46.713524582Z" level=info msg="StartContainer for \"38d19afc5dd81a9dece4d58d90cad4ee13c843dfe00ce39f9c74d497df0ab217\" returns successfully" Sep 12 17:44:47.233555 systemd-networkd[1820]: cali5ccee6f9107: Gained IPv6LL Sep 12 17:44:47.296354 systemd-networkd[1820]: cali3d87c54580f: Gained IPv6LL Sep 12 17:44:48.422713 systemd[1]: Started sshd@8-172.31.28.238:22-147.75.109.163:56494.service - OpenSSH per-connection server daemon (147.75.109.163:56494). Sep 12 17:44:48.649092 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 56494 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:44:48.657743 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:48.664094 systemd-logind[1965]: New session 9 of user core. Sep 12 17:44:48.670337 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:44:49.236686 sshd[5796]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:49.242579 systemd-logind[1965]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:44:49.242978 systemd[1]: sshd@8-172.31.28.238:22-147.75.109.163:56494.service: Deactivated successfully. Sep 12 17:44:49.245269 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:44:49.246721 systemd-logind[1965]: Removed session 9. 
Sep 12 17:44:50.014269 ntpd[1957]: Listen normally on 7 vxlan.calico 192.168.96.192:123 Sep 12 17:44:50.014368 ntpd[1957]: Listen normally on 8 cali5024fb81ed9 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:44:50.014427 ntpd[1957]: Listen normally on 9 vxlan.calico [fe80::649d:9ff:fe5c:95db%5]:123 Sep 12 17:44:50.014469 ntpd[1957]: Listen normally on 10 calic3393ae3394 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:44:50.014510 ntpd[1957]: Listen normally on 11 calie1f997fe864 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:44:50.014550 ntpd[1957]: Listen normally on 12 cali4add7208d21 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:44:50.014589 ntpd[1957]: Listen normally on 13 calif502e55ab9a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:44:50.014617 ntpd[1957]: Listen normally on 14 cali1e001a9c0b5 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:44:50.014652 ntpd[1957]: Listen normally on 15 cali5ccee6f9107 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:44:50.014687 ntpd[1957]: Listen normally on 16 cali3d87c54580f [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:44:50.953969 containerd[1979]: time="2025-09-12T17:44:50.953902292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:50.955972 containerd[1979]: time="2025-09-12T17:44:50.955875701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:44:50.959059 containerd[1979]: time="2025-09-12T17:44:50.958125090Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:50.962120 containerd[1979]: time="2025-09-12T17:44:50.962049034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:44:50.963008 containerd[1979]: time="2025-09-12T17:44:50.962828802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.436339824s" Sep 12 17:44:50.963008 containerd[1979]: time="2025-09-12T17:44:50.962873110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:44:50.998270 containerd[1979]: time="2025-09-12T17:44:50.998000587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:44:51.000431 containerd[1979]: time="2025-09-12T17:44:51.000387650Z" level=info msg="CreateContainer within sandbox \"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:44:51.059277 containerd[1979]: time="2025-09-12T17:44:51.059133963Z" level=info msg="CreateContainer within sandbox \"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0082385070ce67aaee37736958b08d450c27e4472df6eb8f36d4c4c6f3200fe7\"" Sep 12 17:44:51.061073 containerd[1979]: time="2025-09-12T17:44:51.060011343Z" level=info msg="StartContainer for \"0082385070ce67aaee37736958b08d450c27e4472df6eb8f36d4c4c6f3200fe7\"" Sep 12 17:44:51.153287 systemd[1]: Started cri-containerd-0082385070ce67aaee37736958b08d450c27e4472df6eb8f36d4c4c6f3200fe7.scope - libcontainer container 0082385070ce67aaee37736958b08d450c27e4472df6eb8f36d4c4c6f3200fe7. Sep 12 17:44:51.208890 containerd[1979]: time="2025-09-12T17:44:51.208717391Z" level=info msg="StartContainer for \"0082385070ce67aaee37736958b08d450c27e4472df6eb8f36d4c4c6f3200fe7\" returns successfully" Sep 12 17:44:52.351792 kubelet[3173]: I0912 17:44:52.350674 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f8688b7d-xx7p7" podStartSLOduration=32.809317206 podStartE2EDuration="39.350650976s" podCreationTimestamp="2025-09-12 17:44:13 +0000 UTC" firstStartedPulling="2025-09-12 17:44:44.45611492 +0000 UTC m=+51.018754118" lastFinishedPulling="2025-09-12 17:44:50.997448701 +0000 UTC m=+57.560087888" observedRunningTime="2025-09-12 17:44:51.55571395 +0000 UTC m=+58.118353157" watchObservedRunningTime="2025-09-12 17:44:52.350650976 +0000 UTC m=+58.913290179" Sep 12 17:44:53.860467 containerd[1979]: time="2025-09-12T17:44:53.860417999Z" level=info msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" Sep 12 17:44:54.288277 systemd[1]: Started sshd@9-172.31.28.238:22-147.75.109.163:60702.service - OpenSSH per-connection server daemon (147.75.109.163:60702). Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.321 [WARNING][5878] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee7b516d-8c24-4672-9d6b-95b402198b55", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b", Pod:"csi-node-driver-b4wvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f997fe864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.333 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.333 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" iface="eth0" netns="" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.333 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.333 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.393 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.394 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.394 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.401 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.401 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.403 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:54.408523 containerd[1979]: 2025-09-12 17:44:54.406 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:54.419790 containerd[1979]: time="2025-09-12T17:44:54.419729973Z" level=info msg="TearDown network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" successfully" Sep 12 17:44:54.419790 containerd[1979]: time="2025-09-12T17:44:54.419779629Z" level=info msg="StopPodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" returns successfully" Sep 12 17:44:54.557991 sshd[5885]: Accepted publickey for core from 147.75.109.163 port 60702 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:44:54.561304 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:54.574915 systemd-logind[1965]: New session 10 of user core. Sep 12 17:44:54.582203 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:44:54.627062 containerd[1979]: time="2025-09-12T17:44:54.626852721Z" level=info msg="RemovePodSandbox for \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" Sep 12 17:44:54.632851 containerd[1979]: time="2025-09-12T17:44:54.632706797Z" level=info msg="Forcibly stopping sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\"" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.770 [WARNING][5909] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee7b516d-8c24-4672-9d6b-95b402198b55", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b", Pod:"csi-node-driver-b4wvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1f997fe864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.770 [INFO][5909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.770 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" iface="eth0" netns="" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.770 [INFO][5909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.770 [INFO][5909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.937 [INFO][5921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.941 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.941 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.989 [WARNING][5921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.989 [INFO][5921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" HandleID="k8s-pod-network.ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Workload="ip--172--31--28--238-k8s-csi--node--driver--b4wvz-eth0" Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:54.996 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:55.019423 containerd[1979]: 2025-09-12 17:44:55.008 [INFO][5909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773" Sep 12 17:44:55.026357 containerd[1979]: time="2025-09-12T17:44:55.022618434Z" level=info msg="TearDown network for sandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" successfully" Sep 12 17:44:55.069533 containerd[1979]: time="2025-09-12T17:44:55.069460200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:55.168856 containerd[1979]: time="2025-09-12T17:44:55.168739778Z" level=info msg="RemovePodSandbox \"ac4ce8eada9969af69ee63cdbb041f5ecfd09d5698e9f872efacf364a64db773\" returns successfully" Sep 12 17:44:55.216008 containerd[1979]: time="2025-09-12T17:44:55.214922574Z" level=info msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" Sep 12 17:44:55.487625 systemd[1]: run-containerd-runc-k8s.io-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc-runc.BqpH35.mount: Deactivated successfully. Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.614 [WARNING][5941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ec97501-8553-450b-a6ff-0411fa4b5fba", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343", Pod:"coredns-7c65d6cfc9-xd7b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4add7208d21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.614 [INFO][5941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.615 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" iface="eth0" netns="" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.618 [INFO][5941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.619 [INFO][5941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.811 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.811 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.812 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.833 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.833 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.837 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:55.863663 containerd[1979]: 2025-09-12 17:44:55.847 [INFO][5941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:55.863663 containerd[1979]: time="2025-09-12T17:44:55.862593816Z" level=info msg="TearDown network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" successfully" Sep 12 17:44:55.863663 containerd[1979]: time="2025-09-12T17:44:55.862625601Z" level=info msg="StopPodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" returns successfully" Sep 12 17:44:55.887924 containerd[1979]: time="2025-09-12T17:44:55.887472332Z" level=info msg="RemovePodSandbox for \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" Sep 12 17:44:55.887924 containerd[1979]: time="2025-09-12T17:44:55.887532631Z" level=info msg="Forcibly stopping sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\"" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.060 [WARNING][5984] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9ec97501-8553-450b-a6ff-0411fa4b5fba", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"4cdc9d6c6dde0a950784006b167ce79d5b4f5be6dc669ffeb42ee17985edd343", Pod:"coredns-7c65d6cfc9-xd7b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4add7208d21", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.060 [INFO][5984] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.060 [INFO][5984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" iface="eth0" netns="" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.060 [INFO][5984] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.060 [INFO][5984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.180 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.183 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.183 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.198 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.198 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" HandleID="k8s-pod-network.a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--xd7b8-eth0" Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.202 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:56.219697 containerd[1979]: 2025-09-12 17:44:56.212 [INFO][5984] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d" Sep 12 17:44:56.222966 containerd[1979]: time="2025-09-12T17:44:56.219696564Z" level=info msg="TearDown network for sandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" successfully" Sep 12 17:44:56.281700 containerd[1979]: time="2025-09-12T17:44:56.281505043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:56.282330 containerd[1979]: time="2025-09-12T17:44:56.281935887Z" level=info msg="RemovePodSandbox \"a991e53c15f2a12ac8820dac044cab5d6319d489cbd0d40f426a5ebd2ca4449d\" returns successfully" Sep 12 17:44:56.327042 containerd[1979]: time="2025-09-12T17:44:56.326978639Z" level=info msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" Sep 12 17:44:56.380329 sshd[5885]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:56.389162 systemd[1]: sshd@9-172.31.28.238:22-147.75.109.163:60702.service: Deactivated successfully. Sep 12 17:44:56.396167 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:44:56.413466 systemd-logind[1965]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:44:56.434948 systemd[1]: Started sshd@10-172.31.28.238:22-147.75.109.163:60712.service - OpenSSH per-connection server daemon (147.75.109.163:60712). Sep 12 17:44:56.439311 systemd-logind[1965]: Removed session 10. Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.530 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0", GenerateName:"calico-kube-controllers-88b68d46-", Namespace:"calico-system", SelfLink:"", UID:"ae71cbe7-247e-4add-9e62-3d52dac5dc6c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88b68d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f", Pod:"calico-kube-controllers-88b68d46-gcsq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccee6f9107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.531 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.531 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" iface="eth0" netns="" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.531 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.531 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.615 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.616 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.618 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.628 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.628 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.630 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:56.637790 containerd[1979]: 2025-09-12 17:44:56.634 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:56.639480 containerd[1979]: time="2025-09-12T17:44:56.638595162Z" level=info msg="TearDown network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" successfully" Sep 12 17:44:56.639480 containerd[1979]: time="2025-09-12T17:44:56.638635196Z" level=info msg="StopPodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" returns successfully" Sep 12 17:44:56.685905 sshd[6017]: Accepted publickey for core from 147.75.109.163 port 60712 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:44:56.688476 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:56.698978 systemd-logind[1965]: New session 11 of user core. Sep 12 17:44:56.702575 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:44:56.735883 containerd[1979]: time="2025-09-12T17:44:56.735430898Z" level=info msg="RemovePodSandbox for \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" Sep 12 17:44:56.735883 containerd[1979]: time="2025-09-12T17:44:56.735479074Z" level=info msg="Forcibly stopping sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\"" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:56.873 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0", GenerateName:"calico-kube-controllers-88b68d46-", Namespace:"calico-system", SelfLink:"", UID:"ae71cbe7-247e-4add-9e62-3d52dac5dc6c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"88b68d46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f", Pod:"calico-kube-controllers-88b68d46-gcsq2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ccee6f9107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:56.876 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:56.876 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" iface="eth0" netns="" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:56.876 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:56.876 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.021 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.021 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.021 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.040 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.040 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" HandleID="k8s-pod-network.cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Workload="ip--172--31--28--238-k8s-calico--kube--controllers--88b68d46--gcsq2-eth0" Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.046 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:57.068341 containerd[1979]: 2025-09-12 17:44:57.061 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a" Sep 12 17:44:57.070430 containerd[1979]: time="2025-09-12T17:44:57.069284212Z" level=info msg="TearDown network for sandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" successfully" Sep 12 17:44:57.195943 containerd[1979]: time="2025-09-12T17:44:57.195380936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:57.195943 containerd[1979]: time="2025-09-12T17:44:57.195491072Z" level=info msg="RemovePodSandbox \"cbb0b20de73f459630fd7d00a3370718ed189f1f7697659275eaed351e5f641a\" returns successfully" Sep 12 17:44:57.399543 sshd[6017]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:57.432492 systemd[1]: sshd@10-172.31.28.238:22-147.75.109.163:60712.service: Deactivated successfully. Sep 12 17:44:57.438920 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:44:57.441298 systemd-logind[1965]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:44:57.451645 systemd[1]: Started sshd@11-172.31.28.238:22-147.75.109.163:60726.service - OpenSSH per-connection server daemon (147.75.109.163:60726). Sep 12 17:44:57.457952 systemd-logind[1965]: Removed session 11. Sep 12 17:44:57.464138 containerd[1979]: time="2025-09-12T17:44:57.463972077Z" level=info msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" Sep 12 17:44:57.716853 sshd[6058]: Accepted publickey for core from 147.75.109.163 port 60726 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:44:57.720831 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:57.740114 systemd-logind[1965]: New session 12 of user core. Sep 12 17:44:57.745275 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.658 [WARNING][6067] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.659 [INFO][6067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.659 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" iface="eth0" netns="" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.659 [INFO][6067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.659 [INFO][6067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.715 [INFO][6075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.715 [INFO][6075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.715 [INFO][6075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.734 [WARNING][6075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.734 [INFO][6075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.742 [INFO][6075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:57.760704 containerd[1979]: 2025-09-12 17:44:57.752 [INFO][6067] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:57.760704 containerd[1979]: time="2025-09-12T17:44:57.760564512Z" level=info msg="TearDown network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" successfully" Sep 12 17:44:57.760704 containerd[1979]: time="2025-09-12T17:44:57.760593649Z" level=info msg="StopPodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" returns successfully" Sep 12 17:44:57.820500 containerd[1979]: time="2025-09-12T17:44:57.819901015Z" level=info msg="RemovePodSandbox for \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" Sep 12 17:44:57.820500 containerd[1979]: time="2025-09-12T17:44:57.819950113Z" level=info msg="Forcibly stopping sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\"" Sep 12 17:44:57.904419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443362756.mount: Deactivated successfully. Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:57.920 [WARNING][6090] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" WorkloadEndpoint="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:57.921 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:57.921 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" iface="eth0" netns="" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:57.921 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:57.921 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.017 [INFO][6100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.017 [INFO][6100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.017 [INFO][6100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.048 [WARNING][6100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.048 [INFO][6100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" HandleID="k8s-pod-network.a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Workload="ip--172--31--28--238-k8s-whisker--7d78848566--lvdjs-eth0" Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.055 [INFO][6100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:58.071375 containerd[1979]: 2025-09-12 17:44:58.062 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84" Sep 12 17:44:58.072075 containerd[1979]: time="2025-09-12T17:44:58.069404372Z" level=info msg="TearDown network for sandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" successfully" Sep 12 17:44:58.083213 containerd[1979]: time="2025-09-12T17:44:58.083163777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:58.083821 containerd[1979]: time="2025-09-12T17:44:58.083789791Z" level=info msg="RemovePodSandbox \"a1ea614dc0628da8cf0dab5f79a5c0af2c0727dad208c2065d81f8a599d62d84\" returns successfully" Sep 12 17:44:58.099973 containerd[1979]: time="2025-09-12T17:44:58.099924992Z" level=info msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.171 [WARNING][6122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f3d2640f-4a08-48ab-aceb-a50afe09f769", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707", Pod:"goldmane-7988f88666-wm2jb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e001a9c0b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.172 [INFO][6122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.172 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" iface="eth0" netns="" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.172 [INFO][6122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.172 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.228 [INFO][6129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.228 [INFO][6129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.229 [INFO][6129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.255 [WARNING][6129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.255 [INFO][6129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.260 [INFO][6129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:58.279431 containerd[1979]: 2025-09-12 17:44:58.271 [INFO][6122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.282326 containerd[1979]: time="2025-09-12T17:44:58.279932047Z" level=info msg="TearDown network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" successfully" Sep 12 17:44:58.282326 containerd[1979]: time="2025-09-12T17:44:58.279971183Z" level=info msg="StopPodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" returns successfully" Sep 12 17:44:58.284894 containerd[1979]: time="2025-09-12T17:44:58.284434846Z" level=info msg="RemovePodSandbox for \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" Sep 12 17:44:58.284894 containerd[1979]: time="2025-09-12T17:44:58.284479431Z" level=info msg="Forcibly stopping sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\"" Sep 12 17:44:58.486281 sshd[6058]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:58.494491 systemd[1]: sshd@11-172.31.28.238:22-147.75.109.163:60726.service: Deactivated successfully. Sep 12 17:44:58.499154 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:44:58.503045 systemd-logind[1965]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:44:58.508781 systemd-logind[1965]: Removed session 12. Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.437 [WARNING][6144] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f3d2640f-4a08-48ab-aceb-a50afe09f769", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707", Pod:"goldmane-7988f88666-wm2jb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e001a9c0b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.439 [INFO][6144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.439 [INFO][6144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" iface="eth0" netns="" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.439 [INFO][6144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.439 [INFO][6144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.528 [INFO][6151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.528 [INFO][6151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.528 [INFO][6151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.538 [WARNING][6151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.538 [INFO][6151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" HandleID="k8s-pod-network.766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Workload="ip--172--31--28--238-k8s-goldmane--7988f88666--wm2jb-eth0" Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.540 [INFO][6151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:58.548326 containerd[1979]: 2025-09-12 17:44:58.543 [INFO][6144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8" Sep 12 17:44:58.548326 containerd[1979]: time="2025-09-12T17:44:58.548247450Z" level=info msg="TearDown network for sandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" successfully" Sep 12 17:44:58.557093 containerd[1979]: time="2025-09-12T17:44:58.556880126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:58.557093 containerd[1979]: time="2025-09-12T17:44:58.556959827Z" level=info msg="RemovePodSandbox \"766a291abd0163fe0d7ab97098cee6403b5d37f97f01d1981f6a63b7b79ce7b8\" returns successfully" Sep 12 17:44:58.559456 containerd[1979]: time="2025-09-12T17:44:58.559015852Z" level=info msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.632 [WARNING][6167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c89a2a04-b2de-4ae0-95af-29cde938d697", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56", Pod:"calico-apiserver-59f8688b7d-xx7p7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif502e55ab9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.632 [INFO][6167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.632 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" iface="eth0" netns="" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.633 [INFO][6167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.634 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.695 [INFO][6174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.695 [INFO][6174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.695 [INFO][6174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.704 [WARNING][6174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.704 [INFO][6174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.708 [INFO][6174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:58.716679 containerd[1979]: 2025-09-12 17:44:58.711 [INFO][6167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.718353 containerd[1979]: time="2025-09-12T17:44:58.716743353Z" level=info msg="TearDown network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" successfully" Sep 12 17:44:58.718353 containerd[1979]: time="2025-09-12T17:44:58.716786433Z" level=info msg="StopPodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" returns successfully" Sep 12 17:44:58.718353 containerd[1979]: time="2025-09-12T17:44:58.717433485Z" level=info msg="RemovePodSandbox for \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" Sep 12 17:44:58.718353 containerd[1979]: time="2025-09-12T17:44:58.717466366Z" level=info msg="Forcibly stopping sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\"" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.827 [WARNING][6188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c89a2a04-b2de-4ae0-95af-29cde938d697", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"9c477e92c401bcd421342e98cee564fc6e1b34f02f9f2a622d79436e9c771b56", Pod:"calico-apiserver-59f8688b7d-xx7p7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif502e55ab9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.830 [INFO][6188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.830 [INFO][6188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" iface="eth0" netns="" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.830 [INFO][6188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.830 [INFO][6188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.916 [INFO][6198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.916 [INFO][6198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.916 [INFO][6198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.925 [WARNING][6198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.925 [INFO][6198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" HandleID="k8s-pod-network.1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--xx7p7-eth0" Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.927 [INFO][6198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:44:58.937054 containerd[1979]: 2025-09-12 17:44:58.931 [INFO][6188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf" Sep 12 17:44:58.937054 containerd[1979]: time="2025-09-12T17:44:58.936949482Z" level=info msg="TearDown network for sandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" successfully" Sep 12 17:44:58.946493 containerd[1979]: time="2025-09-12T17:44:58.946367017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:44:58.946493 containerd[1979]: time="2025-09-12T17:44:58.946445011Z" level=info msg="RemovePodSandbox \"1b411a961c398a93c0d59da51cc0a63bd9e726c87a8bc6e32449e668cd73abbf\" returns successfully" Sep 12 17:44:58.956177 containerd[1979]: time="2025-09-12T17:44:58.956071466Z" level=info msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\"" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.029 [WARNING][6215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dce169d5-8166-4cc4-9317-2154c59c4245", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50", Pod:"coredns-7c65d6cfc9-rww8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3393ae3394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.030 [INFO][6215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.030 [INFO][6215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" iface="eth0" netns="" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.030 [INFO][6215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.030 [INFO][6215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.075 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0" Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.075 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.076 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.085 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0"
Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.085 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0"
Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.088 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:59.096697 containerd[1979]: 2025-09-12 17:44:59.091 [INFO][6215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"
Sep 12 17:44:59.097814 containerd[1979]: time="2025-09-12T17:44:59.096744450Z" level=info msg="TearDown network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" successfully"
Sep 12 17:44:59.097814 containerd[1979]: time="2025-09-12T17:44:59.096994644Z" level=info msg="StopPodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" returns successfully"
Sep 12 17:44:59.134831 containerd[1979]: time="2025-09-12T17:44:59.134786487Z" level=info msg="RemovePodSandbox for \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\""
Sep 12 17:44:59.134831 containerd[1979]: time="2025-09-12T17:44:59.134826657Z" level=info msg="Forcibly stopping sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\""
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.202 [WARNING][6237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"dce169d5-8166-4cc4-9317-2154c59c4245", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"6b71c1dad1da488e46fff9afab8393a34e47fb8e6cc468dd692d5f25c842bf50", Pod:"coredns-7c65d6cfc9-rww8b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3393ae3394", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.202 [INFO][6237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.202 [INFO][6237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" iface="eth0" netns=""
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.202 [INFO][6237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.202 [INFO][6237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.257 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.257 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.257 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.268 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.268 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" HandleID="k8s-pod-network.7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa" Workload="ip--172--31--28--238-k8s-coredns--7c65d6cfc9--rww8b-eth0"
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.271 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:59.279301 containerd[1979]: 2025-09-12 17:44:59.273 [INFO][6237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa"
Sep 12 17:44:59.279301 containerd[1979]: time="2025-09-12T17:44:59.279225372Z" level=info msg="TearDown network for sandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" successfully"
Sep 12 17:44:59.292594 containerd[1979]: time="2025-09-12T17:44:59.292544424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:44:59.292745 containerd[1979]: time="2025-09-12T17:44:59.292631908Z" level=info msg="RemovePodSandbox \"7c48dedf2699b4de097260e3668df494d616be0063196362a8789ea2ed5190fa\" returns successfully"
Sep 12 17:44:59.319374 containerd[1979]: time="2025-09-12T17:44:59.319333068Z" level=info msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\""
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.405 [WARNING][6258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4dc518c-dc79-425e-b59d-e5f73f27e330", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d", Pod:"calico-apiserver-59f8688b7d-vc7r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d87c54580f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.406 [INFO][6258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.406 [INFO][6258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" iface="eth0" netns=""
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.406 [INFO][6258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.406 [INFO][6258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.451 [INFO][6265] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.451 [INFO][6265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.451 [INFO][6265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.461 [WARNING][6265] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.461 [INFO][6265] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.464 [INFO][6265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:59.471922 containerd[1979]: 2025-09-12 17:44:59.467 [INFO][6258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.473950 containerd[1979]: time="2025-09-12T17:44:59.471970360Z" level=info msg="TearDown network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" successfully"
Sep 12 17:44:59.473950 containerd[1979]: time="2025-09-12T17:44:59.472000319Z" level=info msg="StopPodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" returns successfully"
Sep 12 17:44:59.473950 containerd[1979]: time="2025-09-12T17:44:59.472735315Z" level=info msg="RemovePodSandbox for \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\""
Sep 12 17:44:59.473950 containerd[1979]: time="2025-09-12T17:44:59.472785260Z" level=info msg="Forcibly stopping sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\""
Sep 12 17:44:59.660186 containerd[1979]: time="2025-09-12T17:44:59.660145318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.538 [WARNING][6280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0", GenerateName:"calico-apiserver-59f8688b7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4dc518c-dc79-425e-b59d-e5f73f27e330", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f8688b7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-238", ContainerID:"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d", Pod:"calico-apiserver-59f8688b7d-vc7r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d87c54580f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.539 [INFO][6280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.539 [INFO][6280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" iface="eth0" netns=""
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.539 [INFO][6280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.540 [INFO][6280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.646 [INFO][6290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.647 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.647 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.666 [WARNING][6290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.667 [INFO][6290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" HandleID="k8s-pod-network.6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983" Workload="ip--172--31--28--238-k8s-calico--apiserver--59f8688b7d--vc7r8-eth0"
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.671 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:44:59.706598 containerd[1979]: 2025-09-12 17:44:59.674 [INFO][6280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983"
Sep 12 17:44:59.706598 containerd[1979]: time="2025-09-12T17:44:59.661094614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 17:44:59.717064 containerd[1979]: time="2025-09-12T17:44:59.704178265Z" level=info msg="TearDown network for sandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" successfully"
Sep 12 17:44:59.780517 containerd[1979]: time="2025-09-12T17:44:59.780388057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:44:59.781485 containerd[1979]: time="2025-09-12T17:44:59.780757772Z" level=info msg="RemovePodSandbox \"6e818a1b740aad95f514207540f921066b858c5031ebaf07f1ac4768a5a0f983\" returns successfully"
Sep 12 17:44:59.782805 containerd[1979]: time="2025-09-12T17:44:59.782684226Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:59.787471 containerd[1979]: time="2025-09-12T17:44:59.787426349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:59.789199 containerd[1979]: time="2025-09-12T17:44:59.789153405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.790950353s"
Sep 12 17:44:59.789739 containerd[1979]: time="2025-09-12T17:44:59.789362745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 17:44:59.926109 containerd[1979]: time="2025-09-12T17:44:59.917928306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 17:45:00.129945 containerd[1979]: time="2025-09-12T17:45:00.129900720Z" level=info msg="CreateContainer within sandbox \"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:45:00.362254 containerd[1979]: time="2025-09-12T17:45:00.361966306Z" level=info msg="CreateContainer within sandbox \"28bd33f0789b48a8e794514e6e5977ebdee8090b7972957ab8173b9b9e86b707\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f\""
Sep 12 17:45:00.376603 containerd[1979]: time="2025-09-12T17:45:00.376560500Z" level=info msg="StartContainer for \"ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f\""
Sep 12 17:45:00.987296 systemd[1]: Started cri-containerd-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f.scope - libcontainer container ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f.
Sep 12 17:45:01.094041 containerd[1979]: time="2025-09-12T17:45:01.093964783Z" level=info msg="StartContainer for \"ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f\" returns successfully"
Sep 12 17:45:02.929338 kubelet[3173]: E0912 17:45:02.909541 3173 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.116s"
Sep 12 17:45:03.664348 systemd[1]: Started sshd@12-172.31.28.238:22-147.75.109.163:38448.service - OpenSSH per-connection server daemon (147.75.109.163:38448).
Sep 12 17:45:03.902518 kubelet[3173]: I0912 17:45:03.778626 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-wm2jb" podStartSLOduration=32.437327257 podStartE2EDuration="47.77519572s" podCreationTimestamp="2025-09-12 17:44:16 +0000 UTC" firstStartedPulling="2025-09-12 17:44:44.511800568 +0000 UTC m=+51.074439760" lastFinishedPulling="2025-09-12 17:44:59.849669019 +0000 UTC m=+66.412308223" observedRunningTime="2025-09-12 17:45:03.707318172 +0000 UTC m=+70.269957377" watchObservedRunningTime="2025-09-12 17:45:03.77519572 +0000 UTC m=+70.337834947"
Sep 12 17:45:04.259300 sshd[6342]: Accepted publickey for core from 147.75.109.163 port 38448 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:04.266705 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:04.283125 systemd-logind[1965]: New session 13 of user core.
Sep 12 17:45:04.299012 systemd[1]: run-containerd-runc-k8s.io-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f-runc.h8lB4w.mount: Deactivated successfully.
Sep 12 17:45:04.309115 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:45:05.526760 systemd[1]: run-containerd-runc-k8s.io-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f-runc.7blEAM.mount: Deactivated successfully.
Sep 12 17:45:06.838279 sshd[6342]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:06.850069 systemd[1]: sshd@12-172.31.28.238:22-147.75.109.163:38448.service: Deactivated successfully.
Sep 12 17:45:06.850804 systemd-logind[1965]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:45:06.855962 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:45:06.870232 systemd-logind[1965]: Removed session 13.
Sep 12 17:45:07.292498 containerd[1979]: time="2025-09-12T17:45:07.292440975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:07.295153 containerd[1979]: time="2025-09-12T17:45:07.295069424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 17:45:07.298067 containerd[1979]: time="2025-09-12T17:45:07.297969080Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:07.313222 containerd[1979]: time="2025-09-12T17:45:07.310812911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:07.313222 containerd[1979]: time="2025-09-12T17:45:07.312884714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.394905479s"
Sep 12 17:45:07.313222 containerd[1979]: time="2025-09-12T17:45:07.312936881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 17:45:07.318598 containerd[1979]: time="2025-09-12T17:45:07.317591167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:45:07.448116 containerd[1979]: time="2025-09-12T17:45:07.448066095Z" level=info msg="CreateContainer within sandbox \"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:45:07.482081 containerd[1979]: time="2025-09-12T17:45:07.482013049Z" level=info msg="CreateContainer within sandbox \"1cfdbc9bb25f2ea02727338f4d7d9e73ad2278cd2568c0ac1d651d92be788e6f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac\""
Sep 12 17:45:07.484765 containerd[1979]: time="2025-09-12T17:45:07.483112269Z" level=info msg="StartContainer for \"342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac\""
Sep 12 17:45:07.882454 systemd[1]: Started cri-containerd-342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac.scope - libcontainer container 342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac.
Sep 12 17:45:07.941174 containerd[1979]: time="2025-09-12T17:45:07.941121347Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:07.948589 containerd[1979]: time="2025-09-12T17:45:07.948157479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 17:45:07.965846 containerd[1979]: time="2025-09-12T17:45:07.964048557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 646.381463ms"
Sep 12 17:45:07.965846 containerd[1979]: time="2025-09-12T17:45:07.964104816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 12 17:45:07.970081 containerd[1979]: time="2025-09-12T17:45:07.969843975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:45:08.035108 containerd[1979]: time="2025-09-12T17:45:08.035064002Z" level=info msg="StartContainer for \"342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac\" returns successfully"
Sep 12 17:45:08.073660 containerd[1979]: time="2025-09-12T17:45:08.073616235Z" level=info msg="CreateContainer within sandbox \"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:45:08.107204 containerd[1979]: time="2025-09-12T17:45:08.107098921Z" level=info msg="CreateContainer within sandbox \"352a62ca16d964d537cf42165950e510183a82e84e24fc46bc8d7f859176a13d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32594589b4bded34443645235fc5cf447b81c19d3a0a93086388ec95fd613b21\""
Sep 12 17:45:08.108611 containerd[1979]: time="2025-09-12T17:45:08.108555277Z" level=info msg="StartContainer for \"32594589b4bded34443645235fc5cf447b81c19d3a0a93086388ec95fd613b21\""
Sep 12 17:45:08.188409 systemd[1]: Started cri-containerd-32594589b4bded34443645235fc5cf447b81c19d3a0a93086388ec95fd613b21.scope - libcontainer container 32594589b4bded34443645235fc5cf447b81c19d3a0a93086388ec95fd613b21.
Sep 12 17:45:08.279375 containerd[1979]: time="2025-09-12T17:45:08.279333810Z" level=info msg="StartContainer for \"32594589b4bded34443645235fc5cf447b81c19d3a0a93086388ec95fd613b21\" returns successfully"
Sep 12 17:45:09.022052 kubelet[3173]: I0912 17:45:09.020560 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f8688b7d-vc7r8" podStartSLOduration=33.787334268 podStartE2EDuration="56.000007201s" podCreationTimestamp="2025-09-12 17:44:13 +0000 UTC" firstStartedPulling="2025-09-12 17:44:45.75372734 +0000 UTC m=+52.316366532" lastFinishedPulling="2025-09-12 17:45:07.966400273 +0000 UTC m=+74.529039465" observedRunningTime="2025-09-12 17:45:08.999720903 +0000 UTC m=+75.562360109" watchObservedRunningTime="2025-09-12 17:45:09.000007201 +0000 UTC m=+75.562646426"
Sep 12 17:45:09.023277 kubelet[3173]: I0912 17:45:09.023001 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-88b68d46-gcsq2" podStartSLOduration=30.433773942 podStartE2EDuration="52.022978639s" podCreationTimestamp="2025-09-12 17:44:17 +0000 UTC" firstStartedPulling="2025-09-12 17:44:45.726392317 +0000 UTC m=+52.289031511" lastFinishedPulling="2025-09-12 17:45:07.315597001 +0000 UTC m=+73.878236208" observedRunningTime="2025-09-12 17:45:08.908616833 +0000 UTC m=+75.471256038" watchObservedRunningTime="2025-09-12 17:45:09.022978639 +0000 UTC m=+75.585617845"
Sep 12 17:45:11.886946 systemd[1]: Started sshd@13-172.31.28.238:22-147.75.109.163:36652.service - OpenSSH per-connection server daemon (147.75.109.163:36652).
Sep 12 17:45:11.950062 containerd[1979]: time="2025-09-12T17:45:11.949330172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:11.959057 containerd[1979]: time="2025-09-12T17:45:11.951882600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:45:11.959057 containerd[1979]: time="2025-09-12T17:45:11.954721159Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:11.959057 containerd[1979]: time="2025-09-12T17:45:11.958434623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:11.959385 containerd[1979]: time="2025-09-12T17:45:11.959326056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.989440034s"
Sep 12 17:45:11.959910 containerd[1979]: time="2025-09-12T17:45:11.959467390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:45:12.016922 containerd[1979]: time="2025-09-12T17:45:12.016465079Z" level=info msg="CreateContainer within sandbox \"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:45:12.049169 containerd[1979]: time="2025-09-12T17:45:12.049128833Z" level=info msg="CreateContainer within sandbox \"cdc284fea4a7fea243084fed32493fa565d5a393da7470bd38b7d7977e64542b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"33853fde1f9d86c0f6249019326e0f591f2706b821146d0eb94b7daa87fdc4b3\""
Sep 12 17:45:12.049968 containerd[1979]: time="2025-09-12T17:45:12.049944568Z" level=info msg="StartContainer for \"33853fde1f9d86c0f6249019326e0f591f2706b821146d0eb94b7daa87fdc4b3\""
Sep 12 17:45:12.134874 systemd[1]: Started cri-containerd-33853fde1f9d86c0f6249019326e0f591f2706b821146d0eb94b7daa87fdc4b3.scope - libcontainer container 33853fde1f9d86c0f6249019326e0f591f2706b821146d0eb94b7daa87fdc4b3.
Sep 12 17:45:12.186333 containerd[1979]: time="2025-09-12T17:45:12.185405383Z" level=info msg="StartContainer for \"33853fde1f9d86c0f6249019326e0f591f2706b821146d0eb94b7daa87fdc4b3\" returns successfully"
Sep 12 17:45:12.196980 sshd[6522]: Accepted publickey for core from 147.75.109.163 port 36652 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:12.200600 sshd[6522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:12.207289 systemd-logind[1965]: New session 14 of user core.
Sep 12 17:45:12.214447 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:45:13.400625 kubelet[3173]: I0912 17:45:13.397681 3173 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:45:13.403439 kubelet[3173]: I0912 17:45:13.403413 3173 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:45:13.526291 sshd[6522]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:13.535764 systemd-logind[1965]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:45:13.536227 systemd[1]: sshd@13-172.31.28.238:22-147.75.109.163:36652.service: Deactivated successfully.
Sep 12 17:45:13.541490 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:45:13.543048 systemd-logind[1965]: Removed session 14.
Sep 12 17:45:18.568791 systemd[1]: Started sshd@14-172.31.28.238:22-147.75.109.163:36658.service - OpenSSH per-connection server daemon (147.75.109.163:36658).
Sep 12 17:45:18.883112 sshd[6579]: Accepted publickey for core from 147.75.109.163 port 36658 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:18.896083 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:18.909294 systemd-logind[1965]: New session 15 of user core.
Sep 12 17:45:18.915244 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:45:20.103668 sshd[6579]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:20.108826 systemd[1]: sshd@14-172.31.28.238:22-147.75.109.163:36658.service: Deactivated successfully.
Sep 12 17:45:20.112009 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:45:20.116934 systemd-logind[1965]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:45:20.134176 systemd-logind[1965]: Removed session 15.
Sep 12 17:45:20.138773 systemd[1]: Started sshd@15-172.31.28.238:22-147.75.109.163:39316.service - OpenSSH per-connection server daemon (147.75.109.163:39316).
Sep 12 17:45:20.345992 sshd[6599]: Accepted publickey for core from 147.75.109.163 port 39316 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:20.348194 sshd[6599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:20.354650 systemd-logind[1965]: New session 16 of user core.
Sep 12 17:45:20.359393 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:45:21.065390 sshd[6599]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:21.074786 systemd[1]: sshd@15-172.31.28.238:22-147.75.109.163:39316.service: Deactivated successfully.
Sep 12 17:45:21.079120 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:45:21.080326 systemd-logind[1965]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:45:21.081528 systemd-logind[1965]: Removed session 16.
Sep 12 17:45:21.100438 systemd[1]: Started sshd@16-172.31.28.238:22-147.75.109.163:39322.service - OpenSSH per-connection server daemon (147.75.109.163:39322).
Sep 12 17:45:21.291393 sshd[6610]: Accepted publickey for core from 147.75.109.163 port 39322 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:21.293339 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:21.300975 systemd-logind[1965]: New session 17 of user core.
Sep 12 17:45:21.304301 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:45:24.432532 sshd[6610]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:24.472923 systemd[1]: sshd@16-172.31.28.238:22-147.75.109.163:39322.service: Deactivated successfully.
Sep 12 17:45:24.476203 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:45:24.476596 systemd[1]: session-17.scope: Consumed 1.202s CPU time.
Sep 12 17:45:24.480247 systemd-logind[1965]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:45:24.486526 systemd[1]: Started sshd@17-172.31.28.238:22-147.75.109.163:39330.service - OpenSSH per-connection server daemon (147.75.109.163:39330).
Sep 12 17:45:24.491092 systemd-logind[1965]: Removed session 17.
Sep 12 17:45:24.741266 sshd[6632]: Accepted publickey for core from 147.75.109.163 port 39330 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:24.742977 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:24.750050 systemd-logind[1965]: New session 18 of user core.
Sep 12 17:45:24.755300 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:45:28.012743 sshd[6632]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:28.089829 systemd[1]: sshd@17-172.31.28.238:22-147.75.109.163:39330.service: Deactivated successfully.
Sep 12 17:45:28.095727 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:45:28.096158 systemd[1]: session-18.scope: Consumed 1.018s CPU time.
Sep 12 17:45:28.097563 systemd-logind[1965]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:45:28.109666 systemd[1]: Started sshd@18-172.31.28.238:22-147.75.109.163:39342.service - OpenSSH per-connection server daemon (147.75.109.163:39342).
Sep 12 17:45:28.114060 systemd-logind[1965]: Removed session 18.
Sep 12 17:45:28.544047 sshd[6668]: Accepted publickey for core from 147.75.109.163 port 39342 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:28.550842 sshd[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:28.580310 systemd-logind[1965]: New session 19 of user core.
Sep 12 17:45:28.583232 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:45:28.677609 systemd[1]: run-containerd-runc-k8s.io-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f-runc.bIZIFq.mount: Deactivated successfully.
Sep 12 17:45:30.127956 systemd[1]: run-containerd-runc-k8s.io-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f-runc.Za4bqM.mount: Deactivated successfully.
Sep 12 17:45:32.502887 kubelet[3173]: I0912 17:45:32.469176 3173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b4wvz" podStartSLOduration=45.96254372 podStartE2EDuration="1m15.434206852s" podCreationTimestamp="2025-09-12 17:44:17 +0000 UTC" firstStartedPulling="2025-09-12 17:44:42.4895631 +0000 UTC m=+49.052202285" lastFinishedPulling="2025-09-12 17:45:11.961226231 +0000 UTC m=+78.523865417" observedRunningTime="2025-09-12 17:45:13.176220045 +0000 UTC m=+79.738859251" watchObservedRunningTime="2025-09-12 17:45:32.434206852 +0000 UTC m=+98.996846057"
Sep 12 17:45:33.142912 sshd[6668]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:33.203483 systemd[1]: sshd@18-172.31.28.238:22-147.75.109.163:39342.service: Deactivated successfully.
Sep 12 17:45:33.205243 systemd-logind[1965]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:45:33.212457 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:45:33.212657 systemd[1]: session-19.scope: Consumed 1.444s CPU time.
Sep 12 17:45:33.224791 systemd-logind[1965]: Removed session 19.
Sep 12 17:45:38.227623 systemd[1]: Started sshd@19-172.31.28.238:22-147.75.109.163:59002.service - OpenSSH per-connection server daemon (147.75.109.163:59002).
Sep 12 17:45:38.637142 sshd[6747]: Accepted publickey for core from 147.75.109.163 port 59002 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:38.638612 sshd[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:38.647695 systemd-logind[1965]: New session 20 of user core.
Sep 12 17:45:38.655276 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:45:40.942911 sshd[6747]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:40.947729 systemd[1]: sshd@19-172.31.28.238:22-147.75.109.163:59002.service: Deactivated successfully.
Sep 12 17:45:40.949216 systemd-logind[1965]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:45:40.953691 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:45:40.958745 systemd-logind[1965]: Removed session 20.
Sep 12 17:45:45.994473 systemd[1]: Started sshd@20-172.31.28.238:22-147.75.109.163:42098.service - OpenSSH per-connection server daemon (147.75.109.163:42098).
Sep 12 17:45:46.313934 sshd[6761]: Accepted publickey for core from 147.75.109.163 port 42098 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:46.317505 sshd[6761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:46.327071 systemd-logind[1965]: New session 21 of user core.
Sep 12 17:45:46.335461 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:45:47.834730 sshd[6761]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:47.840228 systemd[1]: sshd@20-172.31.28.238:22-147.75.109.163:42098.service: Deactivated successfully.
Sep 12 17:45:47.844199 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:45:47.845867 systemd-logind[1965]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:45:47.849677 systemd-logind[1965]: Removed session 21.
Sep 12 17:45:52.889259 systemd[1]: Started sshd@21-172.31.28.238:22-147.75.109.163:35922.service - OpenSSH per-connection server daemon (147.75.109.163:35922).
Sep 12 17:45:53.191130 sshd[6775]: Accepted publickey for core from 147.75.109.163 port 35922 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:45:53.191928 sshd[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:45:53.199762 systemd-logind[1965]: New session 22 of user core.
Sep 12 17:45:53.207453 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:45:55.270799 sshd[6775]: pam_unix(sshd:session): session closed for user core
Sep 12 17:45:55.285491 systemd[1]: sshd@21-172.31.28.238:22-147.75.109.163:35922.service: Deactivated successfully.
Sep 12 17:45:55.287146 systemd-logind[1965]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:45:55.292439 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:45:55.305206 systemd-logind[1965]: Removed session 22.
Sep 12 17:46:00.363487 systemd[1]: Started sshd@22-172.31.28.238:22-147.75.109.163:42592.service - OpenSSH per-connection server daemon (147.75.109.163:42592).
Sep 12 17:46:00.809049 sshd[6854]: Accepted publickey for core from 147.75.109.163 port 42592 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:46:00.820187 sshd[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:46:00.837589 systemd-logind[1965]: New session 23 of user core.
Sep 12 17:46:00.843248 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:46:05.907757 sshd[6854]: pam_unix(sshd:session): session closed for user core
Sep 12 17:46:05.927731 systemd[1]: sshd@22-172.31.28.238:22-147.75.109.163:42592.service: Deactivated successfully.
Sep 12 17:46:05.927935 systemd-logind[1965]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:46:05.935372 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:46:05.943375 systemd-logind[1965]: Removed session 23.
Sep 12 17:46:09.991887 systemd[1]: run-containerd-runc-k8s.io-342cbe04a2f0c74a28fa95dc289ca69b2e5c8840b573d8fc495fb43c65e468ac-runc.MoI72L.mount: Deactivated successfully.
Sep 12 17:46:19.131427 systemd[1]: cri-containerd-8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd.scope: Deactivated successfully.
Sep 12 17:46:19.132392 systemd[1]: cri-containerd-8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd.scope: Consumed 13.563s CPU time.
Sep 12 17:46:19.337490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd-rootfs.mount: Deactivated successfully.
Sep 12 17:46:19.396223 containerd[1979]: time="2025-09-12T17:46:19.357197775Z" level=info msg="shim disconnected" id=8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd namespace=k8s.io
Sep 12 17:46:19.396223 containerd[1979]: time="2025-09-12T17:46:19.396119804Z" level=warning msg="cleaning up after shim disconnected" id=8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd namespace=k8s.io
Sep 12 17:46:19.396223 containerd[1979]: time="2025-09-12T17:46:19.396139646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:46:19.635739 systemd[1]: cri-containerd-d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a.scope: Deactivated successfully.
Sep 12 17:46:19.637411 systemd[1]: cri-containerd-d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a.scope: Consumed 3.940s CPU time, 23.3M memory peak, 0B memory swap peak.
Sep 12 17:46:19.665620 containerd[1979]: time="2025-09-12T17:46:19.665005752Z" level=info msg="shim disconnected" id=d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a namespace=k8s.io
Sep 12 17:46:19.665620 containerd[1979]: time="2025-09-12T17:46:19.665124301Z" level=warning msg="cleaning up after shim disconnected" id=d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a namespace=k8s.io
Sep 12 17:46:19.665620 containerd[1979]: time="2025-09-12T17:46:19.665139684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:46:19.671584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a-rootfs.mount: Deactivated successfully.
Sep 12 17:46:20.484368 kubelet[3173]: I0912 17:46:20.475957 3173 scope.go:117] "RemoveContainer" containerID="8365fb2c7d17dbf8439394f2e649e1696252c4b12ef3ecdf339cae2d358553bd"
Sep 12 17:46:20.503947 kubelet[3173]: I0912 17:46:20.503808 3173 scope.go:117] "RemoveContainer" containerID="d0ba681c93144d4edb186b1d644c12fdf6aa087b91cfeb8850139b52e8ee7c8a"
Sep 12 17:46:20.576173 containerd[1979]: time="2025-09-12T17:46:20.576121989Z" level=info msg="CreateContainer within sandbox \"9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:46:20.576828 containerd[1979]: time="2025-09-12T17:46:20.576124423Z" level=info msg="CreateContainer within sandbox \"b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 17:46:20.702203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303702445.mount: Deactivated successfully.
Sep 12 17:46:20.720440 containerd[1979]: time="2025-09-12T17:46:20.719076368Z" level=info msg="CreateContainer within sandbox \"b07e2f2791ef05699d732897af28357c87e2b89e9e4bceac1792ac046396622b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1db151d44c0ca8464bfdb562530fdc3fcef7df25743387f1e995d19519861a0c\""
Sep 12 17:46:20.720748 containerd[1979]: time="2025-09-12T17:46:20.720714931Z" level=info msg="CreateContainer within sandbox \"9837bb8d582f49931dc3726e6bffe9f5fd1c8170831396c70154fbe104d9fde5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fc45fb62d9b24ef63caf6255e736e4dc606cc5234d336fc8a055bc26dbcd5e78\""
Sep 12 17:46:20.722429 containerd[1979]: time="2025-09-12T17:46:20.722403847Z" level=info msg="StartContainer for \"1db151d44c0ca8464bfdb562530fdc3fcef7df25743387f1e995d19519861a0c\""
Sep 12 17:46:20.722951 containerd[1979]: time="2025-09-12T17:46:20.722737487Z" level=info msg="StartContainer for \"fc45fb62d9b24ef63caf6255e736e4dc606cc5234d336fc8a055bc26dbcd5e78\""
Sep 12 17:46:20.846259 systemd[1]: Started cri-containerd-1db151d44c0ca8464bfdb562530fdc3fcef7df25743387f1e995d19519861a0c.scope - libcontainer container 1db151d44c0ca8464bfdb562530fdc3fcef7df25743387f1e995d19519861a0c.
Sep 12 17:46:20.857104 systemd[1]: Started cri-containerd-fc45fb62d9b24ef63caf6255e736e4dc606cc5234d336fc8a055bc26dbcd5e78.scope - libcontainer container fc45fb62d9b24ef63caf6255e736e4dc606cc5234d336fc8a055bc26dbcd5e78.
Sep 12 17:46:20.904347 containerd[1979]: time="2025-09-12T17:46:20.904226393Z" level=info msg="StartContainer for \"1db151d44c0ca8464bfdb562530fdc3fcef7df25743387f1e995d19519861a0c\" returns successfully"
Sep 12 17:46:20.947064 containerd[1979]: time="2025-09-12T17:46:20.946954253Z" level=info msg="StartContainer for \"fc45fb62d9b24ef63caf6255e736e4dc606cc5234d336fc8a055bc26dbcd5e78\" returns successfully"
Sep 12 17:46:24.594738 systemd[1]: cri-containerd-facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f.scope: Deactivated successfully.
Sep 12 17:46:24.595279 systemd[1]: cri-containerd-facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f.scope: Consumed 2.357s CPU time, 20.4M memory peak, 0B memory swap peak.
Sep 12 17:46:24.634269 containerd[1979]: time="2025-09-12T17:46:24.634190061Z" level=info msg="shim disconnected" id=facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f namespace=k8s.io
Sep 12 17:46:24.634754 containerd[1979]: time="2025-09-12T17:46:24.634305712Z" level=warning msg="cleaning up after shim disconnected" id=facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f namespace=k8s.io
Sep 12 17:46:24.634754 containerd[1979]: time="2025-09-12T17:46:24.634320977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:46:24.640289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f-rootfs.mount: Deactivated successfully.
Sep 12 17:46:24.947512 systemd[1]: run-containerd-runc-k8s.io-1a0369653e9f78e412fa69c3fb18eb8e0b27bbe73e8d85a0697879bb7e68c5bc-runc.7dpFlu.mount: Deactivated successfully.
Sep 12 17:46:25.476558 kubelet[3173]: I0912 17:46:25.476523 3173 scope.go:117] "RemoveContainer" containerID="facc259dad7dc45855b5769848bc11a07d55bd1c51dc84eb48e4d8ec63f5fd1f"
Sep 12 17:46:25.480100 containerd[1979]: time="2025-09-12T17:46:25.479098418Z" level=info msg="CreateContainer within sandbox \"2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:46:25.516814 containerd[1979]: time="2025-09-12T17:46:25.516725344Z" level=info msg="CreateContainer within sandbox \"2f81416183ca2d4319d4f78fb7c1c526082450b8179e01388a165da7954171f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"89991549c7d0c22ddd6cf5199e21981c533c519c40e756fb10287b36807f9778\""
Sep 12 17:46:25.517399 containerd[1979]: time="2025-09-12T17:46:25.517364471Z" level=info msg="StartContainer for \"89991549c7d0c22ddd6cf5199e21981c533c519c40e756fb10287b36807f9778\""
Sep 12 17:46:25.555451 systemd[1]: Started cri-containerd-89991549c7d0c22ddd6cf5199e21981c533c519c40e756fb10287b36807f9778.scope - libcontainer container 89991549c7d0c22ddd6cf5199e21981c533c519c40e756fb10287b36807f9778.
Sep 12 17:46:25.619407 containerd[1979]: time="2025-09-12T17:46:25.619358639Z" level=info msg="StartContainer for \"89991549c7d0c22ddd6cf5199e21981c533c519c40e756fb10287b36807f9778\" returns successfully"
Sep 12 17:46:29.246948 kubelet[3173]: E0912 17:46:29.240297 3173 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-28-238)"
Sep 12 17:46:29.866395 systemd[1]: run-containerd-runc-k8s.io-ee5f1f5af47a4ae20790dd8d77ce5815bd668d380579513a33c1c89a151def5f-runc.PXCgCY.mount: Deactivated successfully.