Sep 13 00:10:59.951875 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:10:59.951912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:59.951931 kernel: BIOS-provided physical RAM map:
Sep 13 00:10:59.951942 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:10:59.951951 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 13 00:10:59.951961 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 13 00:10:59.951974 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 13 00:10:59.951985 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 13 00:10:59.951996 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 13 00:10:59.952010 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 13 00:10:59.952021 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 13 00:10:59.952033 kernel: NX (Execute Disable) protection: active
Sep 13 00:10:59.952044 kernel: APIC: Static calls initialized
Sep 13 00:10:59.952055 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:10:59.952068 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 13 00:10:59.952084 kernel: SMBIOS 2.7 present.
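The e820 map above is the firmware's view of physical memory. A quick sanity check is to total the "usable" ranges, which should land just under the 2 GiB a t3.small advertises. A minimal sketch, assuming dmesg-style lines on stdin (the range bounds are inclusive, hence the +1):

```python
import re
import sys

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

usable = 0
for line in sys.stdin:
    m = E820.search(line)
    if m and m.group(3).strip() == "usable":
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        usable += end - start + 1  # e820 ranges are inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")
```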
Sep 13 00:10:59.952096 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 13 00:10:59.952108 kernel: Hypervisor detected: KVM
Sep 13 00:10:59.952120 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:10:59.952133 kernel: kvm-clock: using sched offset of 3724260547 cycles
Sep 13 00:10:59.952147 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:10:59.952160 kernel: tsc: Detected 2499.994 MHz processor
Sep 13 00:10:59.952172 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:10:59.952184 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:10:59.952559 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 13 00:10:59.952579 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:10:59.952593 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:10:59.952607 kernel: Using GB pages for direct mapping
Sep 13 00:10:59.952636 kernel: Secure boot disabled
Sep 13 00:10:59.952656 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:10:59.952668 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 13 00:10:59.952680 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 13 00:10:59.952692 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 13 00:10:59.952704 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 13 00:10:59.952723 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 13 00:10:59.952736 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 13 00:10:59.952750 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 13 00:10:59.952764 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 13 00:10:59.952778 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 13 00:10:59.952792 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 13 00:10:59.952811 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:10:59.952828 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 13 00:10:59.952842 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 13 00:10:59.952856 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 13 00:10:59.952871 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 13 00:10:59.952885 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 13 00:10:59.952899 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 13 00:10:59.952917 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 13 00:10:59.952930 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 13 00:10:59.952944 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 13 00:10:59.952959 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 13 00:10:59.952974 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 13 00:10:59.952989 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 13 00:10:59.953002 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 13 00:10:59.953017 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:10:59.953032 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:10:59.953049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 13 00:10:59.953068 kernel: NUMA: Initialized distance table, cnt=1
Sep 13 00:10:59.953082 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 13 00:10:59.953095 kernel: Zone ranges:
Sep 13 00:10:59.953109 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:10:59.953121 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 13 00:10:59.953134 kernel: Normal empty
Sep 13 00:10:59.953147 kernel: Movable zone start for each node
Sep 13 00:10:59.953160 kernel: Early memory node ranges
Sep 13 00:10:59.953173 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:10:59.953192 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 13 00:10:59.953205 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 13 00:10:59.953219 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 13 00:10:59.953231 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:10:59.953245 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:10:59.953258 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 13 00:10:59.953271 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 13 00:10:59.953286 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:10:59.953299 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:10:59.953317 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 13 00:10:59.953331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:10:59.953346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:10:59.953361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:10:59.953375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:10:59.953388 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:10:59.953401 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:10:59.953414 kernel: TSC deadline timer available
Sep 13 00:10:59.953427 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:10:59.953445 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:10:59.953460 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 13 00:10:59.953475 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:10:59.953489 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:10:59.953502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:10:59.953517 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:10:59.953533 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:10:59.953546 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:10:59.953562 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:10:59.953576 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:10:59.953597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:59.954649 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:10:59.954675 kernel: random: crng init done
Sep 13 00:10:59.954691 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:10:59.954708 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:10:59.954724 kernel: Fallback order for Node 0: 0
Sep 13 00:10:59.954740 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 13 00:10:59.954761 kernel: Policy zone: DMA32
Sep 13 00:10:59.954777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:10:59.954793 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162936K reserved, 0K cma-reserved)
Sep 13 00:10:59.954810 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:10:59.954826 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:10:59.954842 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:10:59.954858 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:10:59.954873 kernel: Dynamic Preempt: voluntary
Sep 13 00:10:59.954889 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:10:59.954910 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:10:59.954935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:10:59.954951 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:10:59.954968 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:10:59.954984 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:10:59.954999 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:10:59.955015 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:10:59.955032 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:10:59.955063 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:10:59.955080 kernel: Console: colour dummy device 80x25
Sep 13 00:10:59.955096 kernel: printk: console [tty0] enabled
Sep 13 00:10:59.955113 kernel: printk: console [ttyS0] enabled
Sep 13 00:10:59.955133 kernel: ACPI: Core revision 20230628
Sep 13 00:10:59.955150 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 13 00:10:59.955167 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:10:59.955184 kernel: x2apic enabled
Sep 13 00:10:59.955200 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:10:59.955218 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Sep 13 00:10:59.955238 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Sep 13 00:10:59.955255 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:10:59.955272 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:10:59.955289 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:10:59.955305 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:10:59.955321 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:10:59.955338 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:10:59.955355 kernel: RETBleed: Vulnerable
Sep 13 00:10:59.955371 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:10:59.955391 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:59.955407 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:59.955424 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:10:59.955440 kernel: active return thunk: its_return_thunk
Sep 13 00:10:59.955457 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:10:59.955473 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:10:59.955490 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:10:59.955507 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:10:59.955523 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:10:59.955540 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:10:59.955556 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:10:59.955576 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:10:59.955592 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:10:59.955609 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 13 00:10:59.955640 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:10:59.955657 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:10:59.955674 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:10:59.955691 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 13 00:10:59.955708 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 13 00:10:59.955725 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 13 00:10:59.955741 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 13 00:10:59.955758 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 13 00:10:59.955779 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:10:59.955795 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:10:59.955812 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:10:59.955828 kernel: landlock: Up and running.
Sep 13 00:10:59.955844 kernel: SELinux: Initializing.
Sep 13 00:10:59.955861 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:10:59.955878 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:10:59.955895 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:10:59.955912 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:59.955929 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:59.955946 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:59.955966 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:10:59.955983 kernel: signal: max sigframe size: 3632
Sep 13 00:10:59.956000 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:10:59.956017 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:10:59.956034 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:10:59.956051 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:10:59.956069 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:10:59.956085 kernel: .... node #0, CPUs: #1
Sep 13 00:10:59.956103 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:10:59.956124 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:10:59.956141 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:10:59.956158 kernel: smpboot: Max logical packages: 1
Sep 13 00:10:59.956174 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Sep 13 00:10:59.956191 kernel: devtmpfs: initialized
Sep 13 00:10:59.956208 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:10:59.956225 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 13 00:10:59.956242 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:10:59.956263 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:10:59.956280 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:10:59.956297 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:10:59.956314 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:10:59.956331 kernel: audit: type=2000 audit(1757722260.015:1): state=initialized audit_enabled=0 res=1
Sep 13 00:10:59.956348 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:10:59.956365 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:10:59.956382 kernel: cpuidle: using governor menu
Sep 13 00:10:59.956398 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:10:59.956419 kernel: dca service started, version 1.12.1
Sep 13 00:10:59.956436 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:10:59.956453 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
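The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data, GDS) each have a matching sysfs entry, which is the easiest way to audit a running guest without grepping the journal. A minimal sketch; the directory exists on any recent kernel:

```python
from pathlib import Path

# Each "Vulnerable"/"Mitigation" line in dmesg is mirrored under sysfs;
# on this host several entries will read "Vulnerable", matching the log.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:25} {entry.read_text().strip()}")
```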
Sep 13 00:10:59.956470 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:10:59.956487 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:10:59.956503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:10:59.956520 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:10:59.956537 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:10:59.956554 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:10:59.956574 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:10:59.956591 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:10:59.956608 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:10:59.958680 kernel: ACPI: Interpreter enabled
Sep 13 00:10:59.958702 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:10:59.958723 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:10:59.958743 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:10:59.958763 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:10:59.958783 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:10:59.958804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:10:59.959093 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:10:59.959289 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:10:59.959485 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:10:59.959506 kernel: acpiphp: Slot [3] registered
Sep 13 00:10:59.959523 kernel: acpiphp: Slot [4] registered
Sep 13 00:10:59.959541 kernel: acpiphp: Slot [5] registered
Sep 13 00:10:59.959558 kernel: acpiphp: Slot [6] registered
Sep 13 00:10:59.959582 kernel: acpiphp: Slot [7] registered
Sep 13 00:10:59.959598 kernel: acpiphp: Slot [8] registered
Sep 13 00:10:59.959635 kernel: acpiphp: Slot [9] registered
Sep 13 00:10:59.959652 kernel: acpiphp: Slot [10] registered
Sep 13 00:10:59.959669 kernel: acpiphp: Slot [11] registered
Sep 13 00:10:59.959685 kernel: acpiphp: Slot [12] registered
Sep 13 00:10:59.959703 kernel: acpiphp: Slot [13] registered
Sep 13 00:10:59.959720 kernel: acpiphp: Slot [14] registered
Sep 13 00:10:59.959737 kernel: acpiphp: Slot [15] registered
Sep 13 00:10:59.959761 kernel: acpiphp: Slot [16] registered
Sep 13 00:10:59.959778 kernel: acpiphp: Slot [17] registered
Sep 13 00:10:59.959795 kernel: acpiphp: Slot [18] registered
Sep 13 00:10:59.959812 kernel: acpiphp: Slot [19] registered
Sep 13 00:10:59.959829 kernel: acpiphp: Slot [20] registered
Sep 13 00:10:59.959846 kernel: acpiphp: Slot [21] registered
Sep 13 00:10:59.959863 kernel: acpiphp: Slot [22] registered
Sep 13 00:10:59.959881 kernel: acpiphp: Slot [23] registered
Sep 13 00:10:59.959899 kernel: acpiphp: Slot [24] registered
Sep 13 00:10:59.959916 kernel: acpiphp: Slot [25] registered
Sep 13 00:10:59.959939 kernel: acpiphp: Slot [26] registered
Sep 13 00:10:59.959956 kernel: acpiphp: Slot [27] registered
Sep 13 00:10:59.959973 kernel: acpiphp: Slot [28] registered
Sep 13 00:10:59.959990 kernel: acpiphp: Slot [29] registered
Sep 13 00:10:59.960007 kernel: acpiphp: Slot [30] registered
Sep 13 00:10:59.960023 kernel: acpiphp: Slot [31] registered
Sep 13 00:10:59.960040 kernel: PCI host bridge to bus 0000:00
Sep 13 00:10:59.960224 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:10:59.960366 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:10:59.960490 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:10:59.960609 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:10:59.966134 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:10:59.966274 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:10:59.966450 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:10:59.966605 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:10:59.966798 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 13 00:10:59.966958 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:10:59.967108 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 13 00:10:59.967251 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 13 00:10:59.967394 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 13 00:10:59.967527 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 13 00:10:59.967732 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 13 00:10:59.967871 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 13 00:10:59.968013 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 13 00:10:59.968143 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 13 00:10:59.968283 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 13 00:10:59.968413 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 13 00:10:59.968547 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:10:59.968709 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 13 00:10:59.968844 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 13 00:10:59.968987 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 13 00:10:59.969130 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 13 00:10:59.969148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:10:59.969162 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:10:59.969176 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:10:59.969189 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:10:59.969208 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:10:59.969222 kernel: iommu: Default domain type: Translated
Sep 13 00:10:59.969234 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:10:59.969248 kernel: efivars: Registered efivars operations
Sep 13 00:10:59.969262 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:10:59.969278 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:10:59.969292 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 13 00:10:59.969307 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 13 00:10:59.969453 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 13 00:10:59.969601 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 13 00:10:59.969809 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:10:59.969830 kernel: vgaarb: loaded
Sep 13 00:10:59.969847 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 00:10:59.969863 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 13 00:10:59.969880 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:10:59.969896 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:10:59.969913 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:10:59.969934 kernel: pnp: PnP ACPI init
Sep 13 00:10:59.969950 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 00:10:59.969966 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:10:59.969982 kernel: NET: Registered PF_INET protocol family
Sep 13 00:10:59.969998 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:10:59.970014 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:10:59.970031 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:10:59.970046 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:10:59.970062 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:10:59.970082 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:10:59.970098 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:10:59.970114 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:10:59.970130 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:10:59.970146 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:10:59.970303 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:10:59.970432 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:10:59.970555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:10:59.970744 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:10:59.970865 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 13 00:10:59.971017 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:10:59.971036 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:10:59.971051 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:10:59.971066 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Sep 13 00:10:59.971081 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:10:59.971095 kernel: Initialise system trusted keyrings
Sep 13 00:10:59.971110 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:10:59.971129 kernel: Key type asymmetric registered
Sep 13 00:10:59.971143 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:10:59.971157 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:10:59.971172 kernel: io scheduler mq-deadline registered
Sep 13 00:10:59.971186 kernel: io scheduler kyber registered
Sep 13 00:10:59.971200 kernel: io scheduler bfq registered
Sep 13 00:10:59.971216 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:10:59.971230 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:10:59.971245 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:10:59.971263 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:10:59.971278 kernel: i8042: Warning: Keylock active
Sep 13 00:10:59.971292 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:10:59.971307 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:10:59.971449 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:10:59.971573 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:10:59.971708 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:10:59 UTC (1757722259)
Sep 13 00:10:59.971835 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:10:59.971852 kernel: intel_pstate: CPU model not supported
Sep 13 00:10:59.971867 kernel: efifb: probing for efifb
Sep 13 00:10:59.971881 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 13 00:10:59.971896 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 13 00:10:59.971911 kernel: efifb: scrolling: redraw
Sep 13 00:10:59.971926 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:10:59.971940 kernel: Console: switching to colour frame buffer device 100x37
Sep 13 00:10:59.971955 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:10:59.971969 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:10:59.971987 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:10:59.972002 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:10:59.972016 kernel: Segment Routing with IPv6
Sep 13 00:10:59.972030 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:10:59.972044 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:10:59.972059 kernel: Key type dns_resolver registered
Sep 13 00:10:59.972097 kernel: IPI shorthand broadcast: enabled
Sep 13 00:10:59.972115 kernel: sched_clock: Marking stable (468002808, 138057494)->(699514688, -93454386)
Sep 13 00:10:59.972131 kernel: registered taskstats version 1
Sep 13 00:10:59.972149 kernel: Loading compiled-in X.509 certificates
Sep 13 00:10:59.972165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:10:59.972179 kernel: Key type .fscrypt registered
Sep 13 00:10:59.972194 kernel: Key type fscrypt-provisioning registered
Sep 13 00:10:59.972208 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:10:59.972224 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:10:59.972239 kernel: ima: No architecture policies found
Sep 13 00:10:59.972254 kernel: clk: Disabling unused clocks
Sep 13 00:10:59.972272 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:10:59.972288 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:10:59.972304 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:10:59.972318 kernel: Run /init as init process
Sep 13 00:10:59.972333 kernel: with arguments:
Sep 13 00:10:59.972348 kernel: /init
Sep 13 00:10:59.972363 kernel: with environment:
Sep 13 00:10:59.972377 kernel: HOME=/
Sep 13 00:10:59.972392 kernel: TERM=linux
Sep 13 00:10:59.972406 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:10:59.972428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:10:59.972448 systemd[1]: Detected virtualization amazon.
Sep 13 00:10:59.972464 systemd[1]: Detected architecture x86-64.
Sep 13 00:10:59.972479 systemd[1]: Running in initrd.
Sep 13 00:10:59.972494 systemd[1]: No hostname configured, using default hostname.
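The rtc_cmos entry above prints the same instant in two forms, a calendar date and a Unix epoch. Converting the epoch back is a quick way to confirm the RTC was sane at boot; a one-liner sketch:

```python
from datetime import datetime, timezone

# Epoch value taken from the "setting system clock" line above.
epoch = 1757722259
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-09-13T00:10:59+00:00, matching the journal timestamps
```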
Sep 13 00:10:59.972509 systemd[1]: Hostname set to .
Sep 13 00:10:59.972529 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:10:59.972544 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:10:59.972560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:59.972576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:59.972593 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:10:59.972619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:10:59.972637 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:10:59.972657 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:10:59.972674 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:10:59.972690 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:10:59.972706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:59.972722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:59.972741 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:10:59.972757 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:10:59.972773 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:10:59.972787 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:10:59.972802 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:59.972819 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:59.972836 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:10:59.972852 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:10:59.972869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:59.972890 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:59.972907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:59.972924 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:10:59.972940 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:10:59.972957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:10:59.972973 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:10:59.972990 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:10:59.973007 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:10:59.973026 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:10:59.973044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:59.973060 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:59.973077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:59.973122 systemd-journald[178]: Collecting audit messages is disabled.
Sep 13 00:10:59.973163 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:10:59.973181 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:10:59.973199 systemd-journald[178]: Journal started
Sep 13 00:10:59.973236 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2ca91c941f11cda731e8ae51cbbcdc) is 4.7M, max 38.2M, 33.4M free.
Sep 13 00:10:59.979669 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:10:59.978667 systemd-modules-load[179]: Inserted module 'overlay'
Sep 13 00:10:59.980953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:59.985603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:59.999929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:11:00.002833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:11:00.007403 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:11:00.007713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:11:00.027643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:11:00.031194 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 13 00:11:00.031984 kernel: Bridge firewalling registered
Sep 13 00:11:00.034331 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:11:00.037076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:11:00.049932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:11:00.053823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:11:00.054878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:11:00.058025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:11:00.070745 dracut-cmdline[206]: dracut-dracut-053
Sep 13 00:11:00.075438 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:11:00.079212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:11:00.090174 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:11:00.132815 systemd-resolved[229]: Positive Trust Anchors:
Sep 13 00:11:00.133772 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:11:00.133838 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:11:00.142644 systemd-resolved[229]: Defaulting to hostname 'linux'.
Sep 13 00:11:00.144594 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:11:00.146227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:11:00.179645 kernel: SCSI subsystem initialized
Sep 13 00:11:00.190664 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:11:00.205643 kernel: iscsi: registered transport (tcp)
Sep 13 00:11:00.233964 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:11:00.234052 kernel: QLogic iSCSI HBA Driver
Sep 13 00:11:00.303916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:11:00.311933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:11:00.350633 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:11:00.350712 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:11:00.355102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:11:00.455756 kernel: raid6: avx512x4 gen() 12027 MB/s
Sep 13 00:11:00.474149 kernel: raid6: avx512x2 gen() 10070 MB/s
Sep 13 00:11:00.492979 kernel: raid6: avx512x1 gen() 9026 MB/s
Sep 13 00:11:00.510702 kernel: raid6: avx2x4 gen() 8762 MB/s
Sep 13 00:11:00.531668 kernel: raid6: avx2x2 gen() 8402 MB/s
Sep 13 00:11:00.553420 kernel: raid6: avx2x1 gen() 7661 MB/s
Sep 13 00:11:00.553502 kernel: raid6: using algorithm avx512x4 gen() 12027 MB/s
Sep 13 00:11:00.574512 kernel: raid6: .... xor() 4141 MB/s, rmw enabled
Sep 13 00:11:00.574598 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:11:00.623661 kernel: xor: automatically using best checksumming function avx
Sep 13 00:11:01.027454 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:11:01.063701 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:11:01.077933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:11:01.139919 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Sep 13 00:11:01.155496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:11:01.169378 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:11:01.218890 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Sep 13 00:11:01.291476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:11:01.302371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:11:01.390148 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:11:01.396925 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
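The raid6 block above is the kernel benchmarking each syndrome-generation routine and keeping the fastest (avx512x4 here). A small sketch that mirrors that pick by parsing the same lines, assuming dmesg-style input on stdin:

```python
import re
import sys

# Match lines like "raid6: avx512x4 gen() 12027 MB/s" and pick the fastest,
# the same selection the kernel logs as "using algorithm ...".
GEN = re.compile(r"raid6: (\S+) gen\(\) (\d+) MB/s")

results = {m.group(1): int(m.group(2)) for m in map(GEN.search, sys.stdin) if m}
best = max(results, key=results.get)
print(f"fastest: {best} at {results[best]} MB/s")  # avx512x4 at 12027 MB/s here
```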
Sep 13 00:11:01.435726 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:11:01.438197 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:11:01.439750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:11:01.441190 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:11:01.448008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:11:01.474174 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:11:01.507026 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:11:01.533892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:11:01.534079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:11:01.534968 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:11:01.535570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:11:01.537600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:11:01.538268 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:11:01.547143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:11:01.569956 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:11:01.569995 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:11:01.570014 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 13 00:11:01.570254 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 13 00:11:01.570418 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 13 00:11:01.570604 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 13 00:11:01.570822 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:11:01.570847 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5a:fb:df:6e:5f
Sep 13 00:11:01.571518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:11:01.571696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:11:01.582505 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 13 00:11:01.587711 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:11:01.587784 kernel: GPT:9289727 != 16777215
Sep 13 00:11:01.587805 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:11:01.587825 kernel: GPT:9289727 != 16777215
Sep 13 00:11:01.587843 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:11:01.587872 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:11:01.586856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:11:01.593226 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Sep 13 00:11:01.625217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:11:01.632925 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:11:01.661530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
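The GPT warning above is the usual benign symptom of a disk image copied onto a larger EBS volume: the backup GPT header must live on the disk's last LBA, but the primary header still records where it sat in the smaller image. A sketch of the arithmetic, assuming 512-byte sectors (my interpretation of the two numbers, not kernel output):

```python
# The kernel compares the alt-header LBA recorded in the primary header
# (9289727) against the actual last LBA of the disk (16777215).
SECTOR = 512
disk_sectors = 16777216            # 16777216 * 512 B = 8 GiB EBS volume
expected_alt_lba = disk_sectors - 1
image_alt_lba = 9289727            # last LBA of the original ~4.4 GiB image

print(f"expected backup header LBA: {expected_alt_lba}")
print(f"recorded backup header LBA: {image_alt_lba}")
print(f"disk grew by {(expected_alt_lba - image_alt_lba) * SECTOR / 2**30:.1f} GiB")
```

The disk-uuid step logged shortly after rewrites the headers, which is why the complaint does not recur on later rescans.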
Sep 13 00:11:01.685483 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Sep 13 00:11:01.699677 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (453)
Sep 13 00:11:01.715024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 13 00:11:01.743347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 13 00:11:01.781588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 13 00:11:01.788011 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 13 00:11:01.788601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 13 00:11:01.801911 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:11:01.810379 disk-uuid[627]: Primary Header is updated.
Sep 13 00:11:01.810379 disk-uuid[627]: Secondary Entries is updated.
Sep 13 00:11:01.810379 disk-uuid[627]: Secondary Header is updated.
Sep 13 00:11:01.823637 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:11:01.835791 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:11:01.848661 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:11:02.846727 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 13 00:11:02.847658 disk-uuid[628]: The operation has completed successfully.
Sep 13 00:11:02.994262 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:11:02.994408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:11:03.018855 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:11:03.023233 sh[971]: Success
Sep 13 00:11:03.039652 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:11:03.156722 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:11:03.171761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:11:03.175345 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:11:03.216898 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:11:03.216989 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:11:03.219110 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:11:03.222582 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:11:03.222644 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:11:03.264714 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:11:03.288308 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:11:03.289342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:11:03.297569 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:11:03.299872 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:11:03.333992 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:11:03.334073 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:11:03.334099 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:11:03.353645 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:11:03.367232 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:11:03.372138 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:11:03.380863 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:11:03.387937 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:11:03.413675 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:11:03.419888 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:11:03.453859 systemd-networkd[1163]: lo: Link UP
Sep 13 00:11:03.453874 systemd-networkd[1163]: lo: Gained carrier
Sep 13 00:11:03.455811 systemd-networkd[1163]: Enumeration completed
Sep 13 00:11:03.456257 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:11:03.456262 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:11:03.457444 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:11:03.462511 systemd[1]: Reached target network.target - Network.
Sep 13 00:11:03.463488 systemd-networkd[1163]: eth0: Link UP
Sep 13 00:11:03.463495 systemd-networkd[1163]: eth0: Gained carrier
Sep 13 00:11:03.463511 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:11:03.473704 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.31.45/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 13 00:11:03.673659 ignition[1130]: Ignition 2.19.0
Sep 13 00:11:03.673673 ignition[1130]: Stage: fetch-offline
Sep 13 00:11:03.673968 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:11:03.673981 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:11:03.676807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:11:03.674318 ignition[1130]: Ignition finished successfully
Sep 13 00:11:03.680826 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
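The DHCPv4 lease above is 172.31.31.45/20 with gateway 172.31.16.1. A quick sketch with the standard ipaddress module confirms the gateway sits inside the same /20:

```python
import ipaddress

# Lease parameters taken from the systemd-networkd line above.
iface = ipaddress.ip_interface("172.31.31.45/20")
gw = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gw in iface.network)           # True
print(iface.network.num_addresses)   # 4096 addresses in a /20
```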
Sep 13 00:11:03.707400 ignition[1171]: Ignition 2.19.0
Sep 13 00:11:03.707414 ignition[1171]: Stage: fetch
Sep 13 00:11:03.707962 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:11:03.707977 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:11:03.708099 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:11:03.745169 ignition[1171]: PUT result: OK
Sep 13 00:11:03.748770 ignition[1171]: parsed url from cmdline: ""
Sep 13 00:11:03.748783 ignition[1171]: no config URL provided
Sep 13 00:11:03.748794 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:11:03.748811 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:11:03.748837 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:11:03.764682 ignition[1171]: PUT result: OK
Sep 13 00:11:03.764763 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 13 00:11:03.767761 ignition[1171]: GET result: OK
Sep 13 00:11:03.767875 ignition[1171]: parsing config with SHA512: 9be7318597ac30dc34b9e1c00694c00806b419cf4fbaabe2be18a7acf4f3d313e2fb48fb87ee5e13fbc51da29f6fc3000cc969639a68e682ab979f37e95794fc
Sep 13 00:11:03.808013 unknown[1171]: fetched base config from "system"
Sep 13 00:11:03.808028 unknown[1171]: fetched base config from "system"
Sep 13 00:11:03.810955 ignition[1171]: fetch: fetch complete
Sep 13 00:11:03.808038 unknown[1171]: fetched user config from "aws"
Sep 13 00:11:03.810964 ignition[1171]: fetch: fetch passed
Sep 13 00:11:03.811044 ignition[1171]: Ignition finished successfully
Sep 13 00:11:03.813320 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:11:03.836959 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:11:03.853467 ignition[1177]: Ignition 2.19.0
Sep 13 00:11:03.853482 ignition[1177]: Stage: kargs
Sep 13 00:11:03.854014 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:11:03.854030 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:11:03.854154 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:11:03.855175 ignition[1177]: PUT result: OK
Sep 13 00:11:03.858050 ignition[1177]: kargs: kargs passed
Sep 13 00:11:03.858116 ignition[1177]: Ignition finished successfully
Sep 13 00:11:03.859868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:11:03.866001 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:11:03.882590 ignition[1183]: Ignition 2.19.0
Sep 13 00:11:03.882605 ignition[1183]: Stage: disks
Sep 13 00:11:03.883279 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:11:03.883295 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 13 00:11:03.883424 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 13 00:11:03.884535 ignition[1183]: PUT result: OK
Sep 13 00:11:03.889515 ignition[1183]: disks: disks passed
Sep 13 00:11:03.889582 ignition[1183]: Ignition finished successfully
Sep 13 00:11:03.891494 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:11:03.892416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:11:03.892874 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:11:03.893731 systemd[1]: Reached target local-fs.target - Local File Systems.
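The PUT/GET pairs Ignition logs above are the IMDSv2 flow: first PUT for a short-lived session token, then GET user-data with that token; the SHA512 in the journal is the digest of the fetched config. A minimal sketch of the same exchange, only meaningful from inside an EC2 instance (the token TTL value here is our choice, not taken from the log):

```python
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for an IMDSv2 session token.
req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: GET user-data with the token, same endpoint version as the log.
req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token})
user_data = urllib.request.urlopen(req, timeout=2).read()

# Matches the "parsing config with SHA512: ..." digest for the same payload.
print(hashlib.sha512(user_data).hexdigest())
```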
Sep 13 00:11:03.894307 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:11:03.895201 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:11:03.907975 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:11:03.970260 systemd-fsck[1191]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:11:03.974177 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:11:03.980085 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:11:04.088654 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:11:04.089109 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:11:04.090316 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:11:04.098199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:11:04.101805 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:11:04.103262 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:11:04.103346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:11:04.103388 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:11:04.117299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:11:04.121663 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1210)
Sep 13 00:11:04.128128 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:11:04.128206 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:11:04.128229 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 13 00:11:04.131080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:11:04.141652 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 13 00:11:04.144153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:11:04.280106 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:11:04.287043 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:11:04.291906 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:11:04.297354 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:11:04.437201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:11:04.443793 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:11:04.446299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:11:04.457045 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:11:04.459669 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:11:04.491744 ignition[1325]: INFO : Ignition 2.19.0 Sep 13 00:11:04.491744 ignition[1325]: INFO : Stage: mount Sep 13 00:11:04.491744 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:11:04.491744 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:11:04.491744 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:11:04.495325 ignition[1325]: INFO : PUT result: OK Sep 13 00:11:04.499101 ignition[1325]: INFO : mount: mount passed Sep 13 00:11:04.501049 ignition[1325]: INFO : Ignition finished successfully Sep 13 00:11:04.502087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:11:04.502809 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:11:04.507755 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:11:04.535122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:11:04.554643 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338) Sep 13 00:11:04.558735 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:11:04.558815 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:11:04.558838 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:11:04.565652 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:11:04.568446 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:11:04.591198 ignition[1355]: INFO : Ignition 2.19.0 Sep 13 00:11:04.591198 ignition[1355]: INFO : Stage: files Sep 13 00:11:04.592624 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:11:04.592624 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:11:04.592624 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:11:04.593920 ignition[1355]: INFO : PUT result: OK Sep 13 00:11:04.596068 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:11:04.596936 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:11:04.596936 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:11:04.604045 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:11:04.604831 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:11:04.604831 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:11:04.604416 unknown[1355]: wrote ssh authorized keys file for user: core Sep 13 00:11:04.606538 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:11:04.606538 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 00:11:04.683564 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:11:05.114588 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 
00:11:05.114588 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:11:05.116656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 13 00:11:05.327813 systemd-networkd[1163]: eth0: Gained IPv6LL Sep 13 00:11:05.552239 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:11:06.080992 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:11:06.080992 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:11:06.083304 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:11:06.083304 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:11:06.083304 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:11:06.083304 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:11:06.083304 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:11:06.087563 
ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:11:06.087563 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:11:06.087563 ignition[1355]: INFO : files: files passed Sep 13 00:11:06.087563 ignition[1355]: INFO : Ignition finished successfully Sep 13 00:11:06.085278 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:11:06.091801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:11:06.093920 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:11:06.097348 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:11:06.097864 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:11:06.108640 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:11:06.108640 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:11:06.111591 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:11:06.113964 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:11:06.114599 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:11:06.120834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:11:06.146782 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:11:06.146994 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:11:06.148330 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:11:06.149457 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:11:06.150343 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:11:06.157800 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:11:06.170954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:11:06.179949 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:11:06.192089 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:11:06.192784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:11:06.193806 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:11:06.194647 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:11:06.194831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:11:06.196152 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:11:06.196981 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:11:06.197755 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:11:06.198514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:11:06.199410 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:11:06.200193 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
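Two mechanics from the files stage above are worth unpacking. The ensureUsers ops run usermod with --root against the mounted /sysroot and then install SSH keys for "core" (real Flatcar routes keys through update-ssh-keys fragments; a plain authorized_keys file stands in for that here). Ops (9) and (a) install a systemd-sysext image: the downloaded .raw lands under /opt/extensions, and a symlink in /etc/extensions, whose target is the final post-pivot path rather than the /sysroot one, lets sysext pick it up after switch root. A sketch of both, with illustrative helper names:

```python
import os
import subprocess
import urllib.request

def ensure_user(root: str, user: str, keys: list[str], *usermod_args: str) -> None:
    # --root points usermod at /sysroot so changes land on the real system,
    # not the initramfs; the concrete modification flags are not in the log,
    # so they are passed through via usermod_args.
    subprocess.run(["usermod", "--root", root, *usermod_args, user], check=True)
    ssh_dir = os.path.join(root, "home", user, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    keyfile = os.path.join(ssh_dir, "authorized_keys")
    with open(keyfile, "w") as f:
        f.write("\n".join(keys) + "\n")
    os.chmod(keyfile, 0o600)  # sshd rejects group/world-readable key files

def install_sysext(url: str, name: str, root: str = "/sysroot") -> None:
    fname = os.path.basename(url)
    store = f"/opt/extensions/{name}"              # final (post-pivot) path
    image = os.path.join(root, store.lstrip("/"), fname)
    os.makedirs(os.path.dirname(image), exist_ok=True)
    urllib.request.urlretrieve(url, image)         # Ignition GETs with retries
    link = os.path.join(root, "etc/extensions", f"{name}.raw")
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(os.path.join(store, fname), link)   # target omits /sysroot

# install_sysext("https://extensions.flatcar.org/extensions/"
#                "kubernetes-v1.33.0-x86-64.raw", "kubernetes")
```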
Sep 13 00:11:06.200958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:11:06.201756 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:11:06.202954 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:11:06.203746 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:11:06.204442 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:11:06.204643 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:11:06.205727 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:11:06.206503 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:11:06.207307 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:11:06.208047 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:11:06.208505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:11:06.208701 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:11:06.210136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:11:06.210317 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:11:06.211111 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:11:06.211270 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:11:06.218276 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:11:06.221939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:11:06.223872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:11:06.224107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:11:06.226916 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:11:06.227104 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:11:06.237475 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:11:06.238735 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:11:06.240726 ignition[1407]: INFO : Ignition 2.19.0 Sep 13 00:11:06.240726 ignition[1407]: INFO : Stage: umount Sep 13 00:11:06.240726 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:11:06.240726 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:11:06.247298 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:11:06.247298 ignition[1407]: INFO : PUT result: OK Sep 13 00:11:06.247298 ignition[1407]: INFO : umount: umount passed Sep 13 00:11:06.247298 ignition[1407]: INFO : Ignition finished successfully Sep 13 00:11:06.247817 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:11:06.248084 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:11:06.249004 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:11:06.249053 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:11:06.250074 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:11:06.250122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:11:06.251315 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Sep 13 00:11:06.251361 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:11:06.251792 systemd[1]: Stopped target network.target - Network. Sep 13 00:11:06.252414 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:11:06.252462 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:11:06.253752 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:11:06.254527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:11:06.255215 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:11:06.257680 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:11:06.258182 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:11:06.258714 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:11:06.258773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:11:06.259283 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:11:06.259350 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:11:06.261712 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:11:06.261786 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:11:06.262861 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:11:06.262931 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:11:06.263556 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:11:06.264114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:11:06.267284 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:11:06.272705 systemd-networkd[1163]: eth0: DHCPv6 lease lost Sep 13 00:11:06.274002 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:11:06.274543 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:11:06.277581 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:11:06.277766 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:11:06.279915 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:11:06.279977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:11:06.285768 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:11:06.287174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:11:06.287807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:11:06.289334 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:11:06.289404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:11:06.290073 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:11:06.290137 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:11:06.291432 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:11:06.291478 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:11:06.292281 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:11:06.306853 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:11:06.307889 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 13 00:11:06.309048 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:11:06.309241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:11:06.311291 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:11:06.311360 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:11:06.312041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:11:06.312090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:11:06.312801 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:11:06.312867 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:11:06.313935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:11:06.314000 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:11:06.315268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:11:06.315336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:11:06.322938 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:11:06.323746 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:11:06.323842 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:11:06.324595 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:11:06.324672 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:11:06.325298 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:11:06.325360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:11:06.327753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:11:06.327827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:11:06.333866 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:11:06.334957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:11:06.408429 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:11:06.408540 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:11:06.409932 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:11:06.410376 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:11:06.410447 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:11:06.421905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:11:06.431708 systemd[1]: Switching root. Sep 13 00:11:06.464051 systemd-journald[178]: Journal stopped Sep 13 00:11:07.893142 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Sep 13 00:11:07.893257 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:11:07.893283 kernel: SELinux: policy capability open_perms=1 Sep 13 00:11:07.893315 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:11:07.893345 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:11:07.893364 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:11:07.893385 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:11:07.893405 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:11:07.893422 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:11:07.893440 kernel: audit: type=1403 audit(1757722266.727:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:11:07.893459 systemd[1]: Successfully loaded SELinux policy in 54.990ms. Sep 13 00:11:07.893486 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.705ms. Sep 13 00:11:07.893508 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:11:07.893534 systemd[1]: Detected virtualization amazon. Sep 13 00:11:07.893555 systemd[1]: Detected architecture x86-64. Sep 13 00:11:07.893580 systemd[1]: Detected first boot. Sep 13 00:11:07.893605 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:11:07.893641 zram_generator::config[1450]: No configuration found. Sep 13 00:11:07.893660 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:11:07.893678 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:11:07.893696 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:11:07.893719 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:11:07.893739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:11:07.894229 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:11:07.894270 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:11:07.894292 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:11:07.894314 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:11:07.894336 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:11:07.894358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:11:07.894388 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:11:07.894411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:11:07.894441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:11:07.894464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:11:07.894487 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:11:07.894512 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
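"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the hypervisor-supplied DMI UUID rather than generating a random one. A sketch of the source it reads on KVM guests like this instance; the dash-stripping normalization is our assumption about the exact transform:

```python
# On first boot in a VM, systemd derives the machine ID from the DMI
# product UUID that the hypervisor exposes.
with open("/sys/class/dmi/id/product_uuid") as f:
    vm_uuid = f.read().strip()
# /etc/machine-id holds 32 lowercase hex digits, i.e. the UUID sans dashes.
print(vm_uuid.replace("-", "").lower())
```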
Sep 13 00:11:07.894534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:11:07.894555 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:11:07.894578 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:11:07.894605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:11:07.895706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:11:07.895733 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:11:07.895754 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:11:07.895776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:11:07.895798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:11:07.895821 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:11:07.895844 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:11:07.895873 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:11:07.895896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:11:07.895921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:11:07.895943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:11:07.895965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:11:07.895987 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:11:07.896010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:11:07.896034 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:11:07.896056 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:11:07.896083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:07.896106 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:11:07.896129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:11:07.896152 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:11:07.896175 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:11:07.896199 systemd[1]: Reached target machines.target - Containers. Sep 13 00:11:07.896227 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:11:07.896251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:11:07.896279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:11:07.896302 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:11:07.896325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:11:07.896347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:11:07.896369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:11:07.896392 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 13 00:11:07.896414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:11:07.896438 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:11:07.896459 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:11:07.896485 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:11:07.896508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:11:07.896530 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:11:07.896552 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:11:07.896575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:11:07.896598 kernel: fuse: init (API version 7.39) Sep 13 00:11:07.897656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:11:07.897693 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:11:07.897717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:11:07.897747 kernel: loop: module loaded Sep 13 00:11:07.897771 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:11:07.897795 systemd[1]: Stopped verity-setup.service. Sep 13 00:11:07.897819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:07.897842 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:11:07.897865 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:11:07.897888 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:11:07.897911 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:11:07.897934 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:11:07.897962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:11:07.897986 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:11:07.898009 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:11:07.898032 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:11:07.898059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:11:07.898082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:11:07.898101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:11:07.898127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:11:07.898149 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:11:07.898171 kernel: ACPI: bus type drm_connector registered Sep 13 00:11:07.898196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:11:07.898216 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:11:07.898237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:11:07.898260 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:11:07.898280 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:11:07.898301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
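Each modprobe@<name>.service instance above loads exactly one module; the template unit's ExecStart boils down to "modprobe -abq <instance>". The same sequence as a loop:

```python
import subprocess

# -a: treat all arguments as module names, -b: honor blacklist directives,
# -q: stay quiet if a module does not exist on this kernel.
for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    subprocess.run(["modprobe", "-abq", module], check=False)
```

The "fuse: init", "loop: module loaded", and "bus type drm_connector registered" kernel lines above are those modules announcing themselves as they load.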
Sep 13 00:11:07.898321 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:11:07.898385 systemd-journald[1535]: Collecting audit messages is disabled. Sep 13 00:11:07.898436 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:11:07.898459 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:11:07.898484 systemd-journald[1535]: Journal started Sep 13 00:11:07.898522 systemd-journald[1535]: Runtime Journal (/run/log/journal/ec2ca91c941f11cda731e8ae51cbbcdc) is 4.7M, max 38.2M, 33.4M free. Sep 13 00:11:07.902670 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:11:07.492535 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:11:07.523044 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 13 00:11:07.523467 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:11:07.919649 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:11:07.924960 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:11:07.928167 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:11:07.932660 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:11:07.942637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:11:07.952413 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:11:07.956654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:11:07.964649 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:11:07.970644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:11:07.982582 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:11:07.985641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:11:07.997644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:11:08.005684 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:11:08.020683 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:11:08.030661 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:11:08.038179 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:11:08.039806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:11:08.042132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:11:08.043360 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:11:08.045590 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:11:08.047209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
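The runtime journal line above (4.7M used, max 38.2M) is consistent with journald's stock defaults: RuntimeMaxUse caps the journal at 10% of the backing /run tmpfs, and systemd sizes /run at 20% of RAM. A back-of-envelope check for this 2 GiB t3.small, assuming those defaults hold here:

```python
ram_mib = 1912              # roughly the usable RAM on a 2 GiB instance
run_mib = ram_mib * 0.20    # default tmpfs sizing for /run
print(round(run_mib * 0.10, 1))  # -> 38.2, matching "max 38.2M" in the log
```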
Sep 13 00:11:08.054237 kernel: loop0: detected capacity change from 0 to 61336 Sep 13 00:11:08.089743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:11:08.099449 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:11:08.112158 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:11:08.125234 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:11:08.136730 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:11:08.134928 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. Sep 13 00:11:08.134950 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. Sep 13 00:11:08.151156 systemd-journald[1535]: Time spent on flushing to /var/log/journal/ec2ca91c941f11cda731e8ae51cbbcdc is 72.789ms for 995 entries. Sep 13 00:11:08.151156 systemd-journald[1535]: System Journal (/var/log/journal/ec2ca91c941f11cda731e8ae51cbbcdc) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:11:08.243486 systemd-journald[1535]: Received client request to flush runtime journal. Sep 13 00:11:08.243557 kernel: loop1: detected capacity change from 0 to 140768 Sep 13 00:11:08.141838 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:11:08.155258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:11:08.162945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:11:08.166300 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:11:08.167255 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:11:08.213422 udevadm[1593]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:11:08.248218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:11:08.268625 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:11:08.278817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:11:08.282657 kernel: loop2: detected capacity change from 0 to 229808 Sep 13 00:11:08.299317 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Sep 13 00:11:08.299679 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Sep 13 00:11:08.305701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:11:08.441650 kernel: loop3: detected capacity change from 0 to 142488 Sep 13 00:11:08.545091 kernel: loop4: detected capacity change from 0 to 61336 Sep 13 00:11:08.564642 kernel: loop5: detected capacity change from 0 to 140768 Sep 13 00:11:08.587656 kernel: loop6: detected capacity change from 0 to 229808 Sep 13 00:11:08.629728 kernel: loop7: detected capacity change from 0 to 142488 Sep 13 00:11:08.661455 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 13 00:11:08.663190 (sd-merge)[1608]: Merged extensions into '/usr'. Sep 13 00:11:08.671320 systemd[1]: Reloading requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:11:08.671466 systemd[1]: Reloading... Sep 13 00:11:08.804646 zram_generator::config[1637]: No configuration found. 
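The (sd-merge) step above stacks the four extension images over /usr; under the hood systemd-sysext builds a read-only overlayfs whose lower layers are the mounted extension trees plus /usr itself, which is why the reload that follows suddenly sees docker and containerd units. A rough sketch of that mount (paths are illustrative):

```python
import subprocess

def merge_over_usr(extension_dirs: list[str]) -> None:
    # overlayfs resolves lookups left to right, so extensions are listed
    # before the base /usr; having no upperdir makes the mount read-only.
    lowerdir = ":".join(extension_dirs + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={lowerdir}", "/usr"],
        check=True,
    )

# merge_over_usr(["/run/extensions/containerd-flatcar",
#                 "/run/extensions/docker-flatcar",
#                 "/run/extensions/kubernetes",
#                 "/run/extensions/oem-ami"])
```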
Sep 13 00:11:09.025071 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:09.136052 systemd[1]: Reloading finished in 463 ms. Sep 13 00:11:09.180675 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:11:09.191030 systemd[1]: Starting ensure-sysext.service... Sep 13 00:11:09.200886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:11:09.219602 systemd[1]: Reloading requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:11:09.219769 systemd[1]: Reloading... Sep 13 00:11:09.239440 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:11:09.241077 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:11:09.244344 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:11:09.245032 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Sep 13 00:11:09.245134 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Sep 13 00:11:09.256094 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:11:09.256115 systemd-tmpfiles[1686]: Skipping /boot Sep 13 00:11:09.299954 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:11:09.300155 systemd-tmpfiles[1686]: Skipping /boot Sep 13 00:11:09.324708 zram_generator::config[1711]: No configuration found. Sep 13 00:11:09.391639 ldconfig[1557]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:11:09.484048 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:09.539312 systemd[1]: Reloading finished in 318 ms. Sep 13 00:11:09.558954 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:11:09.559836 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:11:09.566174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:11:09.580985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:11:09.584273 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:11:09.589081 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:11:09.596388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:11:09.605018 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:11:09.609740 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:11:09.615384 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.616187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
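The "Duplicate line for path" warnings above come from the same path being configured by more than one tmpfiles.d fragment; the first occurrence wins and the later duplicate is ignored. A quick duplicate finder over the directory the warnings point at:

```python
import collections
import glob

seen = collections.defaultdict(list)
for conf in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(conf) as f:
        for n, line in enumerate(f, 1):
            fields = line.split()
            if len(fields) >= 2 and not line.lstrip().startswith("#"):
                seen[fields[1]].append(f"{conf}:{n}")  # fields[1] is the path
for path, places in seen.items():
    if len(places) > 1:
        print(path, "->", ", ".join(places))
```

(/etc/tmpfiles.d and /run/tmpfiles.d take part in the real lookup as well; they are omitted here for brevity.)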
Sep 13 00:11:09.626058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:11:09.630602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:11:09.635361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:11:09.636173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:11:09.636581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.640383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.641976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:11:09.642221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:11:09.642354 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.656209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.656677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:11:09.667078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:11:09.669868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:11:09.670338 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:11:09.676693 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:11:09.678720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:11:09.682712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:11:09.684170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:11:09.689021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:11:09.689235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:11:09.697725 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:11:09.706976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:11:09.708553 systemd[1]: Finished ensure-sysext.service. Sep 13 00:11:09.727696 systemd-udevd[1774]: Using default interface naming scheme 'v255'. Sep 13 00:11:09.731805 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:11:09.732201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:11:09.734504 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:11:09.734972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:11:09.738069 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 13 00:11:09.760710 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:11:09.776186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:11:09.784962 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:11:09.806309 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:11:09.809239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:11:09.818711 augenrules[1809]: No rules Sep 13 00:11:09.819293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:11:09.821678 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:11:09.829964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:11:09.831036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:11:09.951761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:11:09.957808 (udev-worker)[1827]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:11:09.991357 systemd-networkd[1817]: lo: Link UP Sep 13 00:11:09.991368 systemd-networkd[1817]: lo: Gained carrier Sep 13 00:11:09.996286 systemd-networkd[1817]: Enumeration completed Sep 13 00:11:09.996448 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:11:09.997568 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:11:09.997574 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:11:10.006304 systemd-resolved[1773]: Positive Trust Anchors: Sep 13 00:11:10.007023 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:11:10.009935 systemd-resolved[1773]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:11:10.010222 systemd-resolved[1773]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:11:10.011835 systemd-networkd[1817]: eth0: Link UP Sep 13 00:11:10.012104 systemd-networkd[1817]: eth0: Gained carrier Sep 13 00:11:10.012131 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:11:10.022767 systemd-resolved[1773]: Defaulting to hostname 'linux'. Sep 13 00:11:10.022957 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.31.45/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:11:10.030743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:11:10.031827 systemd[1]: Reached target network.target - Network. 
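The DHCPv4 lease above (172.31.31.45/20 with gateway 172.31.16.1) places the gateway on-link, which is what lets the default route install immediately. Checking the arithmetic:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.31.45/20")
print(iface.network)                                         # 172.31.16.0/20
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
```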
Sep 13 00:11:10.032401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:11:10.067633 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:11:10.073693 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:11:10.079242 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:11:10.091688 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:11:10.103647 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:11:10.104466 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:11:10.110948 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:11:10.139591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:11:10.172498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:11:10.173551 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:11:10.184303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:11:10.191814 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:11:10.201642 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1811) Sep 13 00:11:10.305403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:11:10.351330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 13 00:11:10.356929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:11:10.358222 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:11:10.365970 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:11:10.377241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:11:10.386663 lvm[1937]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:11:10.418633 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:11:10.419438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:11:10.420084 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:11:10.420638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:11:10.421074 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:11:10.421651 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:11:10.422091 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:11:10.422465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:11:10.423014 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:11:10.423051 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:11:10.423382 systemd[1]: Reached target timers.target - Timer Units. 
Sep 13 00:11:10.425577 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:11:10.427705 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:11:10.432868 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:11:10.434944 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:11:10.436429 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:11:10.437255 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:11:10.437929 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:11:10.438659 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:11:10.438705 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:11:10.449724 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:11:10.452876 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:11:10.455842 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:11:10.457781 lvm[1943]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:11:10.464895 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:11:10.478880 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:11:10.479648 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:11:10.492796 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:11:10.514665 systemd[1]: Started ntpd.service - Network Time Service. Sep 13 00:11:10.527765 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:11:10.536801 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 13 00:11:10.542896 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:11:10.552310 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:11:10.564036 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:11:10.566262 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:11:10.566871 jq[1947]: false Sep 13 00:11:10.567548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 13 00:11:10.569986 extend-filesystems[1948]: Found loop4 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found loop5 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found loop6 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found loop7 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p1 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p2 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p3 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found usr Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p4 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p6 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p7 Sep 13 00:11:10.572935 extend-filesystems[1948]: Found nvme0n1p9 Sep 13 00:11:10.572935 extend-filesystems[1948]: Checking size of /dev/nvme0n1p9 Sep 13 00:11:10.575860 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:11:10.587866 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:11:10.592287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:11:10.601164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:11:10.601438 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:11:10.660128 jq[1961]: true Sep 13 00:11:10.673280 dbus-daemon[1946]: [system] SELinux support is enabled Sep 13 00:11:10.674298 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:11:10.680005 update_engine[1959]: I20250913 00:11:10.679396 1959 main.cc:92] Flatcar Update Engine starting Sep 13 00:11:10.680081 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:11:10.680156 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:11:10.682681 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:11:10.682719 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:11:10.683897 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:11:10.684520 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:11:10.689394 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: ---------------------------------------------------- Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: corporation. 
Support and training for ntp-4 are Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: available at https://www.nwtime.org/support Sep 13 00:11:10.690525 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: ---------------------------------------------------- Sep 13 00:11:10.689133 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting Sep 13 00:11:10.689687 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:11:10.689161 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 13 00:11:10.690274 (ntainerd)[1973]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:11:10.689174 ntpd[1951]: ---------------------------------------------------- Sep 13 00:11:10.698817 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: proto: precision = 0.062 usec (-24) Sep 13 00:11:10.689184 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Sep 13 00:11:10.689195 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 13 00:11:10.689204 ntpd[1951]: corporation. Support and training for ntp-4 are Sep 13 00:11:10.689215 ntpd[1951]: available at https://www.nwtime.org/support Sep 13 00:11:10.689226 ntpd[1951]: ---------------------------------------------------- Sep 13 00:11:10.695996 ntpd[1951]: proto: precision = 0.062 usec (-24) Sep 13 00:11:10.701680 ntpd[1951]: basedate set to 2025-08-31 Sep 13 00:11:10.710796 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: basedate set to 2025-08-31 Sep 13 00:11:10.710796 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: gps base set to 2025-08-31 (week 2382) Sep 13 00:11:10.701707 ntpd[1951]: gps base set to 2025-08-31 (week 2382) Sep 13 00:11:10.704863 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:11:10.715024 update_engine[1959]: I20250913 00:11:10.712160 1959 update_check_scheduler.cc:74] Next update check in 11m32s Sep 13 00:11:10.715133 extend-filesystems[1948]: Resized partition /dev/nvme0n1p9 Sep 13 00:11:10.715871 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 00:11:10.716431 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:11:10.719820 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 13 00:11:10.730013 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:11:10.733218 extend-filesystems[1995]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listen normally on 3 eth0 172.31.31.45:123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listen normally on 4 lo [::1]:123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: bind(21) AF_INET6 fe80::45a:fbff:fedf:6e5f%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: unable to create socket on eth0 (5) for fe80::45a:fbff:fedf:6e5f%2#123 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: failed to init interface for address fe80::45a:fbff:fedf:6e5f%2 Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:11:10.736824 ntpd[1951]: 13 Sep 00:11:10 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:11:10.730086 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 13 00:11:10.730299 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Sep 13 00:11:10.730340 ntpd[1951]: Listen normally on 3 eth0 172.31.31.45:123 Sep 13 00:11:10.730382 ntpd[1951]: Listen normally on 4 lo [::1]:123 Sep 13 00:11:10.730432 ntpd[1951]: bind(21) AF_INET6 fe80::45a:fbff:fedf:6e5f%2#123 flags 0x11 failed: Cannot assign requested address Sep 13 00:11:10.730455 ntpd[1951]: unable to create socket on eth0 (5) for fe80::45a:fbff:fedf:6e5f%2#123 Sep 13 00:11:10.730471 ntpd[1951]: failed to init interface for address fe80::45a:fbff:fedf:6e5f%2 Sep 13 00:11:10.730505 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Sep 13 00:11:10.733172 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:11:10.733205 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 13 00:11:10.741272 systemd[1]: Finished setup-oem.service - Setup OEM. 
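The bind(21) failure for fe80::...%2 is ordinary IPv6 timing: the link-local address is still tentative while duplicate address detection runs, so the kernel refuses to bind it (EADDRNOTAVAIL, printed as "Cannot assign requested address"). ntpd copes exactly as the next entry says, by listening on a routing socket and retrying when the interface changes; the later "Listen normally on 6 eth0 [fe80::...]" entry at 00:11:13 is that retry succeeding. A small sketch of the same pattern using a plain timed retry instead of a routing socket (the address and interface are taken from the log, the helper itself is hypothetical):

    import errno
    import socket
    import time

    def bind_link_local(addr, ifname, port=123, retries=5, delay=1.0):
        """Bind UDP to an IPv6 link-local address, retrying while the kernel
        still reports it unusable (EADDRNOTAVAIL during tentative/DAD)."""
        scope = socket.if_nametoindex(ifname)  # fe80:: needs an explicit scope id
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        for _ in range(retries):
            try:
                s.bind((addr, port, 0, scope))  # (addr, port, flowinfo, scope_id)
                return s
            except OSError as e:
                if e.errno != errno.EADDRNOTAVAIL:
                    raise
                time.sleep(delay)  # ntpd instead waits for a routing-socket event
        raise TimeoutError(f"{addr}%{ifname} never became bindable")

    # Hypothetical usage mirroring the log (port 123 requires privileges):
    # sock = bind_link_local("fe80::45a:fbff:fedf:6e5f", "eth0")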
Sep 13 00:11:10.752761 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:11:10.770561 tar[1983]: linux-amd64/LICENSE Sep 13 00:11:10.770561 tar[1983]: linux-amd64/helm Sep 13 00:11:10.774478 coreos-metadata[1945]: Sep 13 00:11:10.772 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:11:10.774478 coreos-metadata[1945]: Sep 13 00:11:10.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 13 00:11:10.777094 coreos-metadata[1945]: Sep 13 00:11:10.775 INFO Fetch successful Sep 13 00:11:10.777094 coreos-metadata[1945]: Sep 13 00:11:10.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 13 00:11:10.777094 coreos-metadata[1945]: Sep 13 00:11:10.776 INFO Fetch successful Sep 13 00:11:10.777094 coreos-metadata[1945]: Sep 13 00:11:10.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 13 00:11:10.780113 coreos-metadata[1945]: Sep 13 00:11:10.778 INFO Fetch successful Sep 13 00:11:10.780113 coreos-metadata[1945]: Sep 13 00:11:10.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 13 00:11:10.780113 coreos-metadata[1945]: Sep 13 00:11:10.779 INFO Fetch successful Sep 13 00:11:10.780113 coreos-metadata[1945]: Sep 13 00:11:10.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 13 00:11:10.780395 jq[1985]: true Sep 13 00:11:10.782028 coreos-metadata[1945]: Sep 13 00:11:10.781 INFO Fetch failed with 404: resource not found Sep 13 00:11:10.782028 coreos-metadata[1945]: Sep 13 00:11:10.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 13 00:11:10.783039 coreos-metadata[1945]: Sep 13 00:11:10.782 INFO Fetch successful Sep 13 00:11:10.783039 coreos-metadata[1945]: Sep 13 00:11:10.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 13 00:11:10.783724 coreos-metadata[1945]: Sep 13 00:11:10.783 INFO Fetch successful Sep 13 00:11:10.783866 coreos-metadata[1945]: Sep 13 00:11:10.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 13 00:11:10.784854 coreos-metadata[1945]: Sep 13 00:11:10.784 INFO Fetch successful Sep 13 00:11:10.784854 coreos-metadata[1945]: Sep 13 00:11:10.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 13 00:11:10.788368 coreos-metadata[1945]: Sep 13 00:11:10.788 INFO Fetch successful Sep 13 00:11:10.792200 coreos-metadata[1945]: Sep 13 00:11:10.790 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 13 00:11:10.795154 coreos-metadata[1945]: Sep 13 00:11:10.793 INFO Fetch successful Sep 13 00:11:10.858201 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:11:10.875439 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:11:10.875439 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:11:10.875439 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:11:10.901432 extend-filesystems[1948]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:11:10.880752 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:11:10.881024 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
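Two things worth unpacking in this stretch. The ext4 messages record an online resize of the root partition from 553472 to 1489915 4 KiB blocks, roughly 2.1 GiB to 5.7 GiB, which resize2fs performs while / stays mounted. And coreos-metadata's "Putting .../latest/api/token" is the IMDSv2 handshake: a PUT mints a session token that every subsequent metadata GET must present, and the 404 on meta-data/ipv6 is the expected answer on an instance with no IPv6 address assigned. A stdlib-only sketch of that token flow (endpoint, headers, and the 2021-01-03 API version as in the log and AWS docs; the helper name is mine):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path, ttl=21600):
        """IMDSv2: PUT for a session token, then GET with the token header."""
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        token = urllib.request.urlopen(req, timeout=2).read().decode()
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    # e.g. imds_get("instance-id"), imds_get("placement/availability-zone")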
Sep 13 00:11:10.883165 systemd-logind[1958]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:11:10.883190 systemd-logind[1958]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:11:10.883215 systemd-logind[1958]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:11:10.896896 systemd-logind[1958]: New seat seat0. Sep 13 00:11:10.913297 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:11:10.915392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:11:10.919417 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:11:11.022689 bash[2029]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:11:11.025723 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:11:11.036013 systemd[1]: Starting sshkeys.service... Sep 13 00:11:11.057643 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1812) Sep 13 00:11:11.123488 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:11:11.135856 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:11:11.146042 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:11:11.146234 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 00:11:11.154657 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1992 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:11:11.169292 systemd[1]: Starting polkit.service - Authorization Manager... Sep 13 00:11:11.241925 polkitd[2074]: Started polkitd version 121 Sep 13 00:11:11.273026 polkitd[2074]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:11:11.273124 polkitd[2074]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:11:11.278338 polkitd[2074]: Finished loading, compiling and executing 2 rules Sep 13 00:11:11.295819 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:11:11.316945 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:11:11.318733 systemd[1]: Started polkit.service - Authorization Manager. Sep 13 00:11:11.329056 polkitd[2074]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:11:11.411625 systemd-resolved[1773]: System hostname changed to 'ip-172-31-31-45'. 
Sep 13 00:11:11.412166 systemd-hostnamed[1992]: Hostname set to (transient) Sep 13 00:11:11.450109 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:11:11.451645 coreos-metadata[2070]: Sep 13 00:11:11.451 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:11:11.454472 coreos-metadata[2070]: Sep 13 00:11:11.453 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 13 00:11:11.455832 coreos-metadata[2070]: Sep 13 00:11:11.455 INFO Fetch successful Sep 13 00:11:11.457484 coreos-metadata[2070]: Sep 13 00:11:11.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:11:11.458707 coreos-metadata[2070]: Sep 13 00:11:11.458 INFO Fetch successful Sep 13 00:11:11.462963 unknown[2070]: wrote ssh authorized keys file for user: core Sep 13 00:11:11.528410 containerd[1973]: time="2025-09-13T00:11:11.527754500Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:11:11.555109 update-ssh-keys[2135]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:11:11.557145 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:11:11.564569 systemd[1]: Finished sshkeys.service. Sep 13 00:11:11.592144 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:11:11.599849 systemd-networkd[1817]: eth0: Gained IPv6LL Sep 13 00:11:11.601047 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:11:11.609815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:11:11.611052 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:11:11.619144 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 13 00:11:11.632536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:11.642064 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:11:11.662086 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:11:11.662319 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:11:11.678924 containerd[1973]: time="2025-09-13T00:11:11.671497656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.674548 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:11:11.684455 containerd[1973]: time="2025-09-13T00:11:11.683874241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:11.684455 containerd[1973]: time="2025-09-13T00:11:11.684267927Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:11:11.684455 containerd[1973]: time="2025-09-13T00:11:11.684306141Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:11:11.684679 containerd[1973]: time="2025-09-13T00:11:11.684507281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 13 00:11:11.684679 containerd[1973]: time="2025-09-13T00:11:11.684536813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.684679 containerd[1973]: time="2025-09-13T00:11:11.684642525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:11.684679 containerd[1973]: time="2025-09-13T00:11:11.684665667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.684936815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.684974426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.685006728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.685030415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.685194308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.686704 containerd[1973]: time="2025-09-13T00:11:11.685509844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:11.690756 containerd[1973]: time="2025-09-13T00:11:11.689663306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:11.690756 containerd[1973]: time="2025-09-13T00:11:11.689716281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:11:11.690756 containerd[1973]: time="2025-09-13T00:11:11.689891698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:11:11.690756 containerd[1973]: time="2025-09-13T00:11:11.689975196Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.700783046Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.700866837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.700893127Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.700917192Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.700950356Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701164027Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701532623Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701726168Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701754591Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701775604Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701800984Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701823655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701844005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.702454 containerd[1973]: time="2025-09-13T00:11:11.701866378Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.701890316Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.701910885Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.701931449Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.701952481Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.701984170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702007894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702027571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702051374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702072912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702097255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702121166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702145135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702181962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703049 containerd[1973]: time="2025-09-13T00:11:11.702205762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702228023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702250736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702271413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702301749Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702334034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702353888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702371698Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702456687Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702484879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702500082Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702516420Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702529017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702547532Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:11:11.703597 containerd[1973]: time="2025-09-13T00:11:11.702568026Z" level=info msg="NRI interface is disabled by configuration." 
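This long plugin-loading run is containerd probing every built-in snapshotter at startup and skipping the ones whose prerequisites are missing: no aufs module in this kernel, /var/lib/containerd sits on ext4 so the btrfs and zfs snapshotters bow out, devmapper is unconfigured, and overlayfs remains as the working default. A rough sketch of the first half of that probe, assuming only /proc/filesystems (containerd's real checks go further, e.g. verifying the filesystem backing its own state directory, which is exactly the "must be a btrfs filesystem" skip above):

    def kernel_filesystems(path="/proc/filesystems"):
        """Filesystem types the running kernel supports (built-in or loaded)."""
        with open(path) as f:
            return {line.split()[-1] for line in f if line.strip()}

    available = kernel_filesystems()
    for fs in ("overlay", "btrfs", "zfs", "aufs"):
        state = "load" if fs in available else "skip: not supported by kernel"
        print(f"snapshotter {fs}: {state}")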
Sep 13 00:11:11.708045 containerd[1973]: time="2025-09-13T00:11:11.702582526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:11:11.708104 containerd[1973]: time="2025-09-13T00:11:11.705477471Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:11:11.708104 containerd[1973]: time="2025-09-13T00:11:11.705580893Z" level=info msg="Connect containerd service" Sep 13 00:11:11.708104 containerd[1973]: time="2025-09-13T00:11:11.705667961Z" level=info msg="using legacy CRI server" Sep 13 00:11:11.708104 containerd[1973]: time="2025-09-13T00:11:11.705682023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:11:11.708104 containerd[1973]: time="2025-09-13T00:11:11.705856666Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711020232Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711480076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711536254Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711587509Z" level=info msg="Start subscribing containerd event" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711652472Z" level=info msg="Start recovering state" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711737029Z" level=info msg="Start event monitor" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711750410Z" level=info msg="Start snapshots syncer" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711763833Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711775234Z" level=info msg="Start streaming server" Sep 13 00:11:11.717651 containerd[1973]: time="2025-09-13T00:11:11.711855125Z" level=info msg="containerd successfully booted in 0.186607s" Sep 13 00:11:11.713077 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:11:11.741971 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:11:11.744394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:11:11.757436 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:11:11.768829 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:11:11.770240 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:11:11.783127 amazon-ssm-agent[2158]: Initializing new seelog logger Sep 13 00:11:11.783501 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Sep 13 00:11:11.783501 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.783501 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.784384 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 processing appconfig overrides Sep 13 00:11:11.784495 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.784495 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.784568 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 processing appconfig overrides Sep 13 00:11:11.785061 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.785061 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.785061 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 processing appconfig overrides Sep 13 00:11:11.785634 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO Proxy environment variables: Sep 13 00:11:11.791661 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:11:11.791661 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 13 00:11:11.791661 amazon-ssm-agent[2158]: 2025/09/13 00:11:11 processing appconfig overrides Sep 13 00:11:11.886657 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO no_proxy: Sep 13 00:11:11.984532 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO https_proxy: Sep 13 00:11:12.083459 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO http_proxy: Sep 13 00:11:12.165817 tar[1983]: linux-amd64/README.md Sep 13 00:11:12.181219 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO Checking if agent identity type OnPrem can be assumed Sep 13 00:11:12.184490 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:11:12.279896 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO Checking if agent identity type EC2 can be assumed Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO Agent will take identity from EC2 Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] Starting Core Agent Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [Registrar] Starting registrar module Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:12 INFO [EC2Identity] EC2 registration was successful. Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:12 INFO [CredentialRefresher] credentialRefresher has started Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:12 INFO [CredentialRefresher] Starting credentials refresher loop Sep 13 00:11:12.298135 amazon-ssm-agent[2158]: 2025-09-13 00:11:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 13 00:11:12.378737 amazon-ssm-agent[2158]: 2025-09-13 00:11:12 INFO [CredentialRefresher] Next credential rotation will be in 31.44999239988333 minutes Sep 13 00:11:13.313347 amazon-ssm-agent[2158]: 2025-09-13 00:11:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 13 00:11:13.415641 amazon-ssm-agent[2158]: 2025-09-13 00:11:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2191) started Sep 13 00:11:13.515687 amazon-ssm-agent[2158]: 2025-09-13 00:11:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 13 00:11:13.689699 ntpd[1951]: Listen normally on 6 eth0 [fe80::45a:fbff:fedf:6e5f%2]:123 Sep 13 00:11:13.690195 ntpd[1951]: 13 Sep 00:11:13 ntpd[1951]: Listen normally on 6 eth0 [fe80::45a:fbff:fedf:6e5f%2]:123 Sep 13 00:11:14.036014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
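The credential refresher's oddly precise "31.44999239988333 minutes" looks like a jittered schedule, plausibly a base rotation interval spread out so a fleet of agents does not hit the credential endpoint simultaneously. The exact computation is internal to amazon-ssm-agent, so treat the sketch below as an illustration of the general pattern rather than the agent's actual formula:

    import random

    def next_rotation_minutes(base=30.0, jitter_frac=0.1):
        """Spread rotations across a fleet: base interval plus random jitter."""
        return base * (1 + random.uniform(0, jitter_frac))

    print(f"{next_rotation_minutes():.14f} minutes")  # same flavor as the log line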
Sep 13 00:11:14.037438 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:11:14.039217 systemd[1]: Startup finished in 600ms (kernel) + 7.012s (initrd) + 7.365s (userspace) = 14.978s. Sep 13 00:11:14.046134 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:14.280198 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:11:14.287522 systemd[1]: Started sshd@0-172.31.31.45:22-139.178.89.65:48424.service - OpenSSH per-connection server daemon (139.178.89.65:48424). Sep 13 00:11:14.464540 sshd[2213]: Accepted publickey for core from 139.178.89.65 port 48424 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:14.467200 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:14.478048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:11:14.486020 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:11:14.489295 systemd-logind[1958]: New session 1 of user core. Sep 13 00:11:14.506850 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:11:14.515420 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:11:14.520431 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:14.679675 systemd[2221]: Queued start job for default target default.target. Sep 13 00:11:14.690739 systemd[2221]: Created slice app.slice - User Application Slice. Sep 13 00:11:14.690775 systemd[2221]: Reached target paths.target - Paths. Sep 13 00:11:14.690790 systemd[2221]: Reached target timers.target - Timers. Sep 13 00:11:14.693801 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:11:14.708443 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:11:14.708599 systemd[2221]: Reached target sockets.target - Sockets. Sep 13 00:11:14.708651 systemd[2221]: Reached target basic.target - Basic System. Sep 13 00:11:14.708711 systemd[2221]: Reached target default.target - Main User Target. Sep 13 00:11:14.708753 systemd[2221]: Startup finished in 180ms. Sep 13 00:11:14.709260 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:11:14.714967 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:11:14.867120 systemd[1]: Started sshd@1-172.31.31.45:22-139.178.89.65:48434.service - OpenSSH per-connection server daemon (139.178.89.65:48434). Sep 13 00:11:15.038363 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 48434 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:15.040249 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:15.045322 systemd-logind[1958]: New session 2 of user core. Sep 13 00:11:15.051835 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:11:15.177323 sshd[2232]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:15.183785 systemd-logind[1958]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:11:15.184492 systemd[1]: sshd@1-172.31.31.45:22-139.178.89.65:48434.service: Deactivated successfully. Sep 13 00:11:15.188964 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:11:15.192195 systemd-logind[1958]: Removed session 2. 
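A small aside on the boot accounting: 600 ms + 7.012 s + 7.365 s sums to 14.977 s, not the printed 14.978 s. That is just display rounding; systemd totals the raw per-phase timestamps first and rounds each figure independently for the message, so the parts can disagree with the total by a millisecond:

    parts_ms = [600, 7012, 7365]   # kernel, initrd, userspace as printed
    print(sum(parts_ms) / 1000)    # 14.977: off by 1 ms from the pre-rounding total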
Sep 13 00:11:15.217006 systemd[1]: Started sshd@2-172.31.31.45:22-139.178.89.65:48438.service - OpenSSH per-connection server daemon (139.178.89.65:48438). Sep 13 00:11:15.274101 kubelet[2207]: E0913 00:11:15.274003 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:15.277085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:15.277362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:15.277920 systemd[1]: kubelet.service: Consumed 1.184s CPU time. Sep 13 00:11:15.378413 sshd[2240]: Accepted publickey for core from 139.178.89.65 port 48438 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:15.379810 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:15.385186 systemd-logind[1958]: New session 3 of user core. Sep 13 00:11:15.390938 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:11:15.508651 sshd[2240]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:15.512266 systemd[1]: sshd@2-172.31.31.45:22-139.178.89.65:48438.service: Deactivated successfully. Sep 13 00:11:15.514337 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:11:15.516178 systemd-logind[1958]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:11:15.517249 systemd-logind[1958]: Removed session 3. Sep 13 00:11:15.544356 systemd[1]: Started sshd@3-172.31.31.45:22-139.178.89.65:48446.service - OpenSSH per-connection server daemon (139.178.89.65:48446). Sep 13 00:11:15.708112 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 48446 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:15.709571 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:15.714594 systemd-logind[1958]: New session 4 of user core. Sep 13 00:11:15.723873 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:11:15.842446 sshd[2248]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:15.845153 systemd[1]: sshd@3-172.31.31.45:22-139.178.89.65:48446.service: Deactivated successfully. Sep 13 00:11:15.847203 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:11:15.848803 systemd-logind[1958]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:11:15.850126 systemd-logind[1958]: Removed session 4. Sep 13 00:11:15.880037 systemd[1]: Started sshd@4-172.31.31.45:22-139.178.89.65:48450.service - OpenSSH per-connection server daemon (139.178.89.65:48450). Sep 13 00:11:16.044480 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 48450 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:16.046238 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:16.051682 systemd-logind[1958]: New session 5 of user core. Sep 13 00:11:16.056943 systemd[1]: Started session-5.scope - Session 5 of User core. 
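The kubelet failure here is the expected one on a node that has not joined a cluster yet: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and until it exists the unit keeps dying and being rescheduled by systemd (the "restart counter" entries further down). For reference, a sketch of dropping a minimal KubeletConfiguration of the kind kubeadm generates, written in Python to stay in one language here; the field values are illustrative rather than read from this host, though cgroupDriver: systemd does match the SystemdCgroup:true visible in the containerd CRI config above:

    import pathlib

    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)  # normally kubeadm writes this during init/join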
Sep 13 00:11:16.187393 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:11:16.187842 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:11:16.202686 sudo[2258]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:16.226933 sshd[2255]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:16.235116 systemd[1]: sshd@4-172.31.31.45:22-139.178.89.65:48450.service: Deactivated successfully. Sep 13 00:11:16.238846 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:11:16.240812 systemd-logind[1958]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:11:16.242344 systemd-logind[1958]: Removed session 5. Sep 13 00:11:16.271226 systemd[1]: Started sshd@5-172.31.31.45:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Sep 13 00:11:16.424658 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:16.426423 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:16.432087 systemd-logind[1958]: New session 6 of user core. Sep 13 00:11:16.441902 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:11:16.541551 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:11:16.541987 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:11:16.546499 sudo[2267]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:16.552377 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:11:16.552933 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:11:16.567056 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:11:16.571080 auditctl[2270]: No rules Sep 13 00:11:16.571626 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:11:16.571857 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:11:16.574600 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:11:16.616582 augenrules[2288]: No rules Sep 13 00:11:16.618147 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:11:16.619464 sudo[2266]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:16.642752 sshd[2263]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:16.647259 systemd[1]: sshd@5-172.31.31.45:22-139.178.89.65:48460.service: Deactivated successfully. Sep 13 00:11:16.649229 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:11:16.650937 systemd-logind[1958]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:11:16.652201 systemd-logind[1958]: Removed session 6. Sep 13 00:11:16.680042 systemd[1]: Started sshd@6-172.31.31.45:22-139.178.89.65:48462.service - OpenSSH per-connection server daemon (139.178.89.65:48462). Sep 13 00:11:16.838251 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 48462 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:11:16.840031 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:16.844677 systemd-logind[1958]: New session 7 of user core. 
Sep 13 00:11:16.851837 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:11:16.949439 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:11:16.949777 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:11:17.479020 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:11:17.490271 (dockerd)[2314]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:11:18.828712 systemd-resolved[1773]: Clock change detected. Flushing caches. Sep 13 00:11:19.196293 dockerd[2314]: time="2025-09-13T00:11:19.196228655Z" level=info msg="Starting up" Sep 13 00:11:19.492675 systemd[1]: var-lib-docker-metacopy\x2dcheck491068593-merged.mount: Deactivated successfully. Sep 13 00:11:19.516520 dockerd[2314]: time="2025-09-13T00:11:19.516132686Z" level=info msg="Loading containers: start." Sep 13 00:11:19.672496 kernel: Initializing XFRM netlink socket Sep 13 00:11:19.723010 (udev-worker)[2338]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:11:19.779382 systemd-networkd[1817]: docker0: Link UP Sep 13 00:11:19.793105 dockerd[2314]: time="2025-09-13T00:11:19.793055425Z" level=info msg="Loading containers: done." Sep 13 00:11:19.812280 dockerd[2314]: time="2025-09-13T00:11:19.812210806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:11:19.812475 dockerd[2314]: time="2025-09-13T00:11:19.812344909Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:11:19.812509 dockerd[2314]: time="2025-09-13T00:11:19.812483869Z" level=info msg="Daemon has completed initialization" Sep 13 00:11:19.854338 dockerd[2314]: time="2025-09-13T00:11:19.853637303Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:11:19.853927 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:11:21.098260 containerd[1973]: time="2025-09-13T00:11:21.098219777Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:11:21.632712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440478898.mount: Deactivated successfully. 
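Docker's "Not using native diff for overlay2" warning is keyed to the kernel build: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, overlayfs may record directory renames as redirect xattrs that a naive read of the upper layer would mishandle, so dockerd falls back to the slower but safe diff path. One way to confirm the kernel option, sketched against the usual config locations (distros differ in which of these, if either, exists):

    import gzip
    import platform

    def kernel_config_lines():
        """Yield kernel config lines from /proc/config.gz, falling back to
        /boot/config-<release>; either may be absent depending on the distro."""
        try:
            with gzip.open("/proc/config.gz", "rt") as f:
                yield from f
        except FileNotFoundError:
            with open(f"/boot/config-{platform.release()}") as f:
                yield from f

    flag = "CONFIG_OVERLAY_FS_REDIRECT_DIR"
    hit = next((l.strip() for l in kernel_config_lines() if l.startswith(flag + "=")),
               f"{flag} is not set")
    print(hit)  # "...=y" on this kernel, hence the degraded-diff warning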
Sep 13 00:11:24.078683 containerd[1973]: time="2025-09-13T00:11:24.078617796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:24.080186 containerd[1973]: time="2025-09-13T00:11:24.080125519Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 00:11:24.082330 containerd[1973]: time="2025-09-13T00:11:24.081314896Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:24.086281 containerd[1973]: time="2025-09-13T00:11:24.085832794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:24.086968 containerd[1973]: time="2025-09-13T00:11:24.086931442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.988661789s" Sep 13 00:11:24.087048 containerd[1973]: time="2025-09-13T00:11:24.086972425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 00:11:24.087482 containerd[1973]: time="2025-09-13T00:11:24.087447224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:11:26.218306 containerd[1973]: time="2025-09-13T00:11:26.218257295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.227087 containerd[1973]: time="2025-09-13T00:11:26.226265388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 00:11:26.227087 containerd[1973]: time="2025-09-13T00:11:26.226754012Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.229878 containerd[1973]: time="2025-09-13T00:11:26.229813283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.231118 containerd[1973]: time="2025-09-13T00:11:26.230930790Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.143450162s" Sep 13 00:11:26.231118 containerd[1973]: time="2025-09-13T00:11:26.230975454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 00:11:26.231935 
containerd[1973]: time="2025-09-13T00:11:26.231886925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:11:26.664491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:11:26.670739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:27.037893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:27.042990 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:27.088324 kubelet[2524]: E0913 00:11:27.088283 2524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:27.092386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:27.092618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:28.373928 containerd[1973]: time="2025-09-13T00:11:28.373866348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.375316 containerd[1973]: time="2025-09-13T00:11:28.375138275Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 00:11:28.378462 containerd[1973]: time="2025-09-13T00:11:28.378389405Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.382230 containerd[1973]: time="2025-09-13T00:11:28.381501506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:28.382577 containerd[1973]: time="2025-09-13T00:11:28.382550395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.150429145s" Sep 13 00:11:28.382666 containerd[1973]: time="2025-09-13T00:11:28.382653838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 00:11:28.383598 containerd[1973]: time="2025-09-13T00:11:28.383488411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:11:29.726246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031937165.mount: Deactivated successfully. 
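Each pull pairs "bytes read" with a wall-clock duration, so rough registry throughput falls straight out of the log. Note that "bytes read" counts what was actually fetched, which can come in under the quoted image size when shared base layers are already local, likely why the scheduler's 20.2 MB read sits below its 21.8 MB listed size. Worked from the three pulls above:

    pulls = {  # image: (bytes read, seconds), copied from the log
        "kube-apiserver:v1.33.5": (30_114_893, 2.988661789),
        "kube-controller-manager:v1.33.5": (26_020_844, 2.143450162),
        "kube-scheduler:v1.33.5": (20_155_568, 2.150429145),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")  # ~10.1, ~12.1, ~9.4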
Sep 13 00:11:30.366447 containerd[1973]: time="2025-09-13T00:11:30.366355867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.368005 containerd[1973]: time="2025-09-13T00:11:30.367826983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 00:11:30.368969 containerd[1973]: time="2025-09-13T00:11:30.368930798Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.377590 containerd[1973]: time="2025-09-13T00:11:30.377477716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.380853 containerd[1973]: time="2025-09-13T00:11:30.379777601Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.996096098s" Sep 13 00:11:30.380853 containerd[1973]: time="2025-09-13T00:11:30.379836610Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 00:11:30.381525 containerd[1973]: time="2025-09-13T00:11:30.381488710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:11:30.850089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267711774.mount: Deactivated successfully. 
Sep 13 00:11:32.560166 containerd[1973]: time="2025-09-13T00:11:32.560088607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:32.563756 containerd[1973]: time="2025-09-13T00:11:32.563691615Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 00:11:32.568018 containerd[1973]: time="2025-09-13T00:11:32.567942466Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:32.576364 containerd[1973]: time="2025-09-13T00:11:32.576284888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:32.578546 containerd[1973]: time="2025-09-13T00:11:32.578334973Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.196802988s" Sep 13 00:11:32.578546 containerd[1973]: time="2025-09-13T00:11:32.578390461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 00:11:32.579735 containerd[1973]: time="2025-09-13T00:11:32.579697771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:11:33.041799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495516973.mount: Deactivated successfully. 
Sep 13 00:11:33.049408 containerd[1973]: time="2025-09-13T00:11:33.049350278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.050742 containerd[1973]: time="2025-09-13T00:11:33.050575481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:11:33.053398 containerd[1973]: time="2025-09-13T00:11:33.051963479Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.055450 containerd[1973]: time="2025-09-13T00:11:33.054678322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:33.055450 containerd[1973]: time="2025-09-13T00:11:33.055301060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.565636ms" Sep 13 00:11:33.055450 containerd[1973]: time="2025-09-13T00:11:33.055329240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:11:33.056878 containerd[1973]: time="2025-09-13T00:11:33.056848107Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:11:33.521674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119847075.mount: Deactivated successfully. Sep 13 00:11:36.476605 containerd[1973]: time="2025-09-13T00:11:36.476516965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:36.493553 containerd[1973]: time="2025-09-13T00:11:36.493451639Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 00:11:36.513218 containerd[1973]: time="2025-09-13T00:11:36.513124639Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:36.524561 containerd[1973]: time="2025-09-13T00:11:36.524508645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:36.525513 containerd[1973]: time="2025-09-13T00:11:36.525465324Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.468582007s" Sep 13 00:11:36.525513 containerd[1973]: time="2025-09-13T00:11:36.525501724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 00:11:37.155785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
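The etcd pull above is the heavyweight of the batch: 58378433 bytes read in 3.468582007s, which works out to roughly 16.8 MB/s from registry.k8s.io. A trivial check of that arithmetic:

```go
// Back-of-envelope check of the etcd pull above:
// 58378433 bytes in 3.468582007s ≈ 16.8 MB/s (≈ 16.1 MiB/s).
package main

import "fmt"

func main() {
	bytesRead := 58378433.0
	seconds := 3.468582007
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n",
		bytesRead/seconds/1e6,          // decimal megabytes
		bytesRead/seconds/(1024*1024)) // binary mebibytes
}
```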
Sep 13 00:11:37.160757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:37.474753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:37.478174 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:37.571392 kubelet[2684]: E0913 00:11:37.568044 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:37.571183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:37.571390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:40.970317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:40.976797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:41.014252 systemd[1]: Reloading requested from client PID 2699 ('systemctl') (unit session-7.scope)... Sep 13 00:11:41.014272 systemd[1]: Reloading... Sep 13 00:11:41.119457 zram_generator::config[2736]: No configuration found. Sep 13 00:11:41.303816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:41.392156 systemd[1]: Reloading finished in 377 ms. Sep 13 00:11:41.441977 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:11:41.442103 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:11:41.442925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:41.449914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:41.697942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:41.709852 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:11:41.755900 kubelet[2805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:41.755900 kubelet[2805]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:11:41.755900 kubelet[2805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:41.759979 kubelet[2805]: I0913 00:11:41.758909 2805 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:11:42.557270 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
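The first kubelet start above (PID 2684) dies immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is only written during init/join, so systemd keeps restarting the unit until it appears. The nested "failed to load ... failed to read ... no such file or directory" message is just Go error wrapping at each layer of the loader. A schematic reproduction of that chaining, with loadKubeletConfig as a hypothetical stand-in for the kubelet's real loader:

```go
// Schematic reproduction of the failure in the kubelet record above:
// each layer wraps the underlying os error, which is why the log
// message nests several "failed to ..." clauses. Not kubelet source.
package main

import (
	"fmt"
	"log"
	"os"
)

func loadKubeletConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read kubelet config file %q, error: %w", path, err)
	}
	return data, nil
}

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := loadKubeletConfig(path); err != nil {
		log.Fatalf("failed to load kubelet config file, path: %s, error: %v", path, err)
	}
}
```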
Sep 13 00:11:42.723677 kubelet[2805]: I0913 00:11:42.723607 2805 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:11:42.723677 kubelet[2805]: I0913 00:11:42.723667 2805 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:11:42.724039 kubelet[2805]: I0913 00:11:42.724012 2805 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:11:42.771153 kubelet[2805]: I0913 00:11:42.771045 2805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:11:42.775512 kubelet[2805]: E0913 00:11:42.775079 2805 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:11:42.794563 kubelet[2805]: E0913 00:11:42.794178 2805 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:11:42.794563 kubelet[2805]: I0913 00:11:42.794237 2805 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:11:42.804395 kubelet[2805]: I0913 00:11:42.804327 2805 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:11:42.809775 kubelet[2805]: I0913 00:11:42.809539 2805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:11:42.814137 kubelet[2805]: I0913 00:11:42.809603 2805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:11:42.817011 kubelet[2805]: I0913 00:11:42.816962 2805 
topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:11:42.817011 kubelet[2805]: I0913 00:11:42.817004 2805 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:11:42.817188 kubelet[2805]: I0913 00:11:42.817170 2805 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:42.822394 kubelet[2805]: I0913 00:11:42.821815 2805 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:11:42.822394 kubelet[2805]: I0913 00:11:42.821859 2805 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:11:42.822394 kubelet[2805]: I0913 00:11:42.821891 2805 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:11:42.822394 kubelet[2805]: I0913 00:11:42.821906 2805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:11:42.827951 kubelet[2805]: E0913 00:11:42.827609 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:11:42.832455 kubelet[2805]: E0913 00:11:42.832067 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:11:42.832735 kubelet[2805]: I0913 00:11:42.832707 2805 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:11:42.833218 kubelet[2805]: I0913 00:11:42.833193 2805 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:11:42.834383 kubelet[2805]: W0913 00:11:42.834351 2805 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
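The NodeConfig dump above spells out the hard eviction thresholds this node runs with: memory.available < 100Mi as an absolute quantity, and nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% as fractions of capacity. Each signal trips on whichever form is set. A simplified sketch of that evaluation (threshold and tripped are hypothetical stand-ins for the eviction manager's types, with made-up sample numbers):

```go
// Sketch of how the HardEvictionThresholds in the NodeConfig record
// above are evaluated: a signal trips either on an absolute quantity
// (memory.available < 100Mi) or on a fraction of capacity
// (e.g. nodefs.available < 10%). Simplified, not kubelet source.
package main

import "fmt"

type threshold struct {
	signal   string
	quantity int64   // absolute bytes/inodes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

func tripped(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "nodefs.inodesFree", percent: 0.05},
		{signal: "imagefs.available", percent: 0.15},
		{signal: "imagefs.inodesFree", percent: 0.05},
	}
	// Hypothetical observation: 80Mi free of 2Gi memory capacity.
	fmt.Println(tripped(thresholds[0], 80<<20, 2<<30)) // true
}
```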
Sep 13 00:11:42.842862 kubelet[2805]: I0913 00:11:42.841895 2805 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:11:42.842862 kubelet[2805]: I0913 00:11:42.841975 2805 server.go:1289] "Started kubelet" Sep 13 00:11:42.844909 kubelet[2805]: I0913 00:11:42.844844 2805 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:11:42.849458 kubelet[2805]: I0913 00:11:42.848084 2805 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:11:42.849458 kubelet[2805]: I0913 00:11:42.848956 2805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:11:42.849458 kubelet[2805]: I0913 00:11:42.849381 2805 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:11:42.854270 kubelet[2805]: E0913 00:11:42.850265 2805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.45:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-45.1864af1f22acc2ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-45,UID:ip-172-31-31-45,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-45,},FirstTimestamp:2025-09-13 00:11:42.841930413 +0000 UTC m=+1.127522140,LastTimestamp:2025-09-13 00:11:42.841930413 +0000 UTC m=+1.127522140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-45,}" Sep 13 00:11:42.855048 kubelet[2805]: I0913 00:11:42.855022 2805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:11:42.857610 kubelet[2805]: I0913 00:11:42.857587 2805 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:11:42.857865 kubelet[2805]: I0913 00:11:42.857850 2805 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:11:42.860573 kubelet[2805]: I0913 00:11:42.860245 2805 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:11:42.860782 kubelet[2805]: I0913 00:11:42.860768 2805 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:11:42.864099 kubelet[2805]: E0913 00:11:42.864062 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:11:42.870408 kubelet[2805]: E0913 00:11:42.870359 2805 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-45\" not found" Sep 13 00:11:42.870568 kubelet[2805]: E0913 00:11:42.870538 2805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="200ms" Sep 13 00:11:42.873764 kubelet[2805]: I0913 00:11:42.873731 2805 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:11:42.873764 kubelet[2805]: I0913 
00:11:42.873751 2805 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:11:42.873957 kubelet[2805]: I0913 00:11:42.873866 2805 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:11:42.894285 kubelet[2805]: I0913 00:11:42.894232 2805 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:11:42.899037 kubelet[2805]: E0913 00:11:42.899004 2805 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:11:42.899143 kubelet[2805]: I0913 00:11:42.899099 2805 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:11:42.899143 kubelet[2805]: I0913 00:11:42.899116 2805 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:11:42.899258 kubelet[2805]: I0913 00:11:42.899157 2805 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:11:42.899258 kubelet[2805]: I0913 00:11:42.899167 2805 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:11:42.899258 kubelet[2805]: E0913 00:11:42.899214 2805 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:11:42.904922 kubelet[2805]: E0913 00:11:42.904854 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:11:42.922588 kubelet[2805]: I0913 00:11:42.922558 2805 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:11:42.922588 kubelet[2805]: I0913 00:11:42.922578 2805 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:11:42.922588 kubelet[2805]: I0913 00:11:42.922603 2805 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:42.925906 kubelet[2805]: I0913 00:11:42.925820 2805 policy_none.go:49] "None policy: Start" Sep 13 00:11:42.925906 kubelet[2805]: I0913 00:11:42.925851 2805 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:11:42.925906 kubelet[2805]: I0913 00:11:42.925864 2805 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:11:42.937183 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:11:42.952643 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:11:42.956186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
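Among the startup records above, the podresources endpoint is brought up with qps=100 and burstTokens=10: a classic token bucket that refills at 100 tokens per second but only lets 10 requests through back-to-back. Those semantics are easy to demonstrate with golang.org/x/time/rate (the same behavior, not necessarily the kubelet's own limiter implementation):

```go
// The "Setting rate limiting for endpoint" record above configures the
// podresources API with qps=100, burstTokens=10. Token-bucket sketch:
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Refill at 100 tokens/s, allow bursts of 10 requests.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	granted := 0
	for i := 0; i < 20; i++ {
		if limiter.Allow() { // non-blocking: only the burst passes instantly
			granted++
		}
	}
	fmt.Println("granted immediately:", granted) // ~10 of 20
}
```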
Sep 13 00:11:42.966714 kubelet[2805]: E0913 00:11:42.966676 2805 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:11:42.967828 kubelet[2805]: I0913 00:11:42.967798 2805 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:11:42.967947 kubelet[2805]: I0913 00:11:42.967817 2805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:11:42.969809 kubelet[2805]: I0913 00:11:42.969698 2805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:11:42.970813 kubelet[2805]: E0913 00:11:42.970685 2805 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:11:42.970813 kubelet[2805]: E0913 00:11:42.970760 2805 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-45\" not found" Sep 13 00:11:42.970813 kubelet[2805]: E0913 00:11:42.970808 2805 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-45\" not found" Sep 13 00:11:43.013243 systemd[1]: Created slice kubepods-burstable-pod5d1047bc4c61b5b4f3f84b8344eec872.slice - libcontainer container kubepods-burstable-pod5d1047bc4c61b5b4f3f84b8344eec872.slice. Sep 13 00:11:43.033007 kubelet[2805]: E0913 00:11:43.032967 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:43.036553 systemd[1]: Created slice kubepods-burstable-pod846eac714dfffa76d28e49d0e4493bd9.slice - libcontainer container kubepods-burstable-pod846eac714dfffa76d28e49d0e4493bd9.slice. Sep 13 00:11:43.051601 kubelet[2805]: E0913 00:11:43.051259 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:43.061048 systemd[1]: Created slice kubepods-burstable-pod5320a754065da919d23ab3a0bbbbc52e.slice - libcontainer container kubepods-burstable-pod5320a754065da919d23ab3a0bbbbc52e.slice. 
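The "Created slice" records above show the systemd cgroup layout the kubelet asks for: a kubepods.slice root, per-QoS child slices (burstable, besteffort), then one slice per pod keyed by its UID. A sketch of the naming pattern visible in those records; podSlice is a hypothetical helper illustrating the convention (the kubelet's real name escaping lives in its container-manager package), and the dash-to-underscore escaping is an assumption based on systemd's use of '-' as a slice separator. The static-pod UIDs above (e.g. 5d1047bc...) are dashless hashes, so they pass through unchanged:

```go
// Illustration of the slice-name pattern in the "Created slice" records:
// kubepods-<qos>-pod<uid>.slice, with dashes in UIDs escaped as
// underscores to avoid colliding with systemd's slice separator.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSlice("burstable", "5d1047bc4c61b5b4f3f84b8344eec872"))
	// kubepods-burstable-pod5d1047bc4c61b5b4f3f84b8344eec872.slice
}
```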
Sep 13 00:11:43.069552 kubelet[2805]: I0913 00:11:43.061411 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-ca-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:43.069552 kubelet[2805]: I0913 00:11:43.061461 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:43.069552 kubelet[2805]: I0913 00:11:43.061490 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:43.069552 kubelet[2805]: I0913 00:11:43.061519 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:43.069552 kubelet[2805]: I0913 00:11:43.061544 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:43.069788 kubelet[2805]: I0913 00:11:43.061567 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:43.069788 kubelet[2805]: I0913 00:11:43.061592 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5320a754065da919d23ab3a0bbbbc52e-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-45\" (UID: \"5320a754065da919d23ab3a0bbbbc52e\") " pod="kube-system/kube-scheduler-ip-172-31-31-45" Sep 13 00:11:43.069788 kubelet[2805]: I0913 00:11:43.061615 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:43.069788 kubelet[2805]: I0913 00:11:43.061639 2805 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:43.070748 kubelet[2805]: E0913 00:11:43.070569 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:43.071273 kubelet[2805]: E0913 00:11:43.071178 2805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="400ms" Sep 13 00:11:43.076979 kubelet[2805]: I0913 00:11:43.076950 2805 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:43.077382 kubelet[2805]: E0913 00:11:43.077351 2805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" Sep 13 00:11:43.280760 kubelet[2805]: I0913 00:11:43.280731 2805 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:43.281169 kubelet[2805]: E0913 00:11:43.281130 2805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" Sep 13 00:11:43.334755 containerd[1973]: time="2025-09-13T00:11:43.334622549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-45,Uid:5d1047bc4c61b5b4f3f84b8344eec872,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:43.360838 containerd[1973]: time="2025-09-13T00:11:43.360788161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-45,Uid:846eac714dfffa76d28e49d0e4493bd9,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:43.371724 containerd[1973]: time="2025-09-13T00:11:43.371681902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-45,Uid:5320a754065da919d23ab3a0bbbbc52e,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:43.472670 kubelet[2805]: E0913 00:11:43.472602 2805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="800ms" Sep 13 00:11:43.689828 kubelet[2805]: I0913 00:11:43.686457 2805 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:43.689828 kubelet[2805]: E0913 00:11:43.686836 2805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" Sep 13 00:11:43.805817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672651278.mount: Deactivated successfully. 
Sep 13 00:11:43.816958 containerd[1973]: time="2025-09-13T00:11:43.816899503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:11:43.817124 containerd[1973]: time="2025-09-13T00:11:43.817033081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:43.818091 containerd[1973]: time="2025-09-13T00:11:43.818057188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:43.819118 containerd[1973]: time="2025-09-13T00:11:43.819061149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:11:43.820742 containerd[1973]: time="2025-09-13T00:11:43.820696329Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:43.822699 containerd[1973]: time="2025-09-13T00:11:43.822307563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:43.822699 containerd[1973]: time="2025-09-13T00:11:43.822615773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:11:43.825926 containerd[1973]: time="2025-09-13T00:11:43.825884836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:11:43.828617 containerd[1973]: time="2025-09-13T00:11:43.828472677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 456.703795ms" Sep 13 00:11:43.830316 containerd[1973]: time="2025-09-13T00:11:43.830189144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.124459ms" Sep 13 00:11:43.837650 containerd[1973]: time="2025-09-13T00:11:43.837470437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.757361ms" Sep 13 00:11:43.970771 kubelet[2805]: E0913 00:11:43.970654 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:11:44.053116 containerd[1973]: time="2025-09-13T00:11:44.046788809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:44.053116 containerd[1973]: time="2025-09-13T00:11:44.046947409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:44.053116 containerd[1973]: time="2025-09-13T00:11:44.047008365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.053116 containerd[1973]: time="2025-09-13T00:11:44.047200250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.055027 kubelet[2805]: E0913 00:11:44.054962 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-45&limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:11:44.069850 containerd[1973]: time="2025-09-13T00:11:44.068938093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:44.069850 containerd[1973]: time="2025-09-13T00:11:44.069019965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:44.069850 containerd[1973]: time="2025-09-13T00:11:44.069052369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.069850 containerd[1973]: time="2025-09-13T00:11:44.069155250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.077059 containerd[1973]: time="2025-09-13T00:11:44.076150357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:44.077059 containerd[1973]: time="2025-09-13T00:11:44.076234706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:44.077059 containerd[1973]: time="2025-09-13T00:11:44.076256885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.077059 containerd[1973]: time="2025-09-13T00:11:44.076390622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:44.101654 systemd[1]: Started cri-containerd-fcde7232fdc02efe1255fe27cbd768bfcd584a747691cf340ca19aba9e7daa6a.scope - libcontainer container fcde7232fdc02efe1255fe27cbd768bfcd584a747691cf340ca19aba9e7daa6a. Sep 13 00:11:44.111915 systemd[1]: Started cri-containerd-4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0.scope - libcontainer container 4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0. 
Sep 13 00:11:44.117868 kubelet[2805]: E0913 00:11:44.117723 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:11:44.122991 systemd[1]: Started cri-containerd-aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70.scope - libcontainer container aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70. Sep 13 00:11:44.200014 containerd[1973]: time="2025-09-13T00:11:44.199966679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-45,Uid:846eac714dfffa76d28e49d0e4493bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0\"" Sep 13 00:11:44.210936 containerd[1973]: time="2025-09-13T00:11:44.210550317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-45,Uid:5d1047bc4c61b5b4f3f84b8344eec872,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcde7232fdc02efe1255fe27cbd768bfcd584a747691cf340ca19aba9e7daa6a\"" Sep 13 00:11:44.223048 containerd[1973]: time="2025-09-13T00:11:44.222930969Z" level=info msg="CreateContainer within sandbox \"4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:11:44.224305 containerd[1973]: time="2025-09-13T00:11:44.224260888Z" level=info msg="CreateContainer within sandbox \"fcde7232fdc02efe1255fe27cbd768bfcd584a747691cf340ca19aba9e7daa6a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:11:44.257690 containerd[1973]: time="2025-09-13T00:11:44.257650203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-45,Uid:5320a754065da919d23ab3a0bbbbc52e,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70\"" Sep 13 00:11:44.258027 containerd[1973]: time="2025-09-13T00:11:44.257898592Z" level=info msg="CreateContainer within sandbox \"4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914\"" Sep 13 00:11:44.258678 containerd[1973]: time="2025-09-13T00:11:44.258623894Z" level=info msg="StartContainer for \"58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914\"" Sep 13 00:11:44.264009 containerd[1973]: time="2025-09-13T00:11:44.263962323Z" level=info msg="CreateContainer within sandbox \"aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:11:44.270686 containerd[1973]: time="2025-09-13T00:11:44.270639910Z" level=info msg="CreateContainer within sandbox \"fcde7232fdc02efe1255fe27cbd768bfcd584a747691cf340ca19aba9e7daa6a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9223130a9712ec289352e08ca19d1a70ca5e907e48f8324e6482447b687b4197\"" Sep 13 00:11:44.271501 containerd[1973]: time="2025-09-13T00:11:44.271461163Z" level=info msg="StartContainer for \"9223130a9712ec289352e08ca19d1a70ca5e907e48f8324e6482447b687b4197\"" Sep 13 00:11:44.273881 kubelet[2805]: E0913 00:11:44.273840 2805 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": dial tcp 172.31.31.45:6443: connect: connection refused" interval="1.6s" Sep 13 00:11:44.288216 containerd[1973]: time="2025-09-13T00:11:44.288073157Z" level=info msg="CreateContainer within sandbox \"aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285\"" Sep 13 00:11:44.291266 containerd[1973]: time="2025-09-13T00:11:44.289950216Z" level=info msg="StartContainer for \"2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285\"" Sep 13 00:11:44.306971 systemd[1]: Started cri-containerd-58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914.scope - libcontainer container 58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914. Sep 13 00:11:44.330702 systemd[1]: Started cri-containerd-9223130a9712ec289352e08ca19d1a70ca5e907e48f8324e6482447b687b4197.scope - libcontainer container 9223130a9712ec289352e08ca19d1a70ca5e907e48f8324e6482447b687b4197. Sep 13 00:11:44.356873 systemd[1]: Started cri-containerd-2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285.scope - libcontainer container 2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285. Sep 13 00:11:44.413507 containerd[1973]: time="2025-09-13T00:11:44.413415026Z" level=info msg="StartContainer for \"58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914\" returns successfully" Sep 13 00:11:44.441381 kubelet[2805]: E0913 00:11:44.441172 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:11:44.451902 containerd[1973]: time="2025-09-13T00:11:44.451649755Z" level=info msg="StartContainer for \"9223130a9712ec289352e08ca19d1a70ca5e907e48f8324e6482447b687b4197\" returns successfully" Sep 13 00:11:44.451902 containerd[1973]: time="2025-09-13T00:11:44.451649789Z" level=info msg="StartContainer for \"2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285\" returns successfully" Sep 13 00:11:44.490547 kubelet[2805]: I0913 00:11:44.489327 2805 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:44.490547 kubelet[2805]: E0913 00:11:44.489712 2805 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.45:6443/api/v1/nodes\": dial tcp 172.31.31.45:6443: connect: connection refused" node="ip-172-31-31-45" Sep 13 00:11:44.895934 kubelet[2805]: E0913 00:11:44.895890 2805 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:11:44.927153 kubelet[2805]: E0913 00:11:44.927026 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:44.931751 kubelet[2805]: E0913 
00:11:44.931481 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:44.932538 kubelet[2805]: E0913 00:11:44.932510 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:45.757894 kubelet[2805]: E0913 00:11:45.757837 2805 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.45:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:11:45.934490 kubelet[2805]: E0913 00:11:45.933981 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:45.934490 kubelet[2805]: E0913 00:11:45.934363 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:46.094253 kubelet[2805]: I0913 00:11:46.093948 2805 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:47.291124 kubelet[2805]: E0913 00:11:47.291087 2805 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:48.451097 kubelet[2805]: E0913 00:11:48.451010 2805 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-45\" not found" node="ip-172-31-31-45" Sep 13 00:11:48.464361 kubelet[2805]: I0913 00:11:48.462738 2805 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-45" Sep 13 00:11:48.467638 kubelet[2805]: I0913 00:11:48.467605 2805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:48.480448 kubelet[2805]: E0913 00:11:48.480281 2805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:48.480448 kubelet[2805]: I0913 00:11:48.480312 2805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-45" Sep 13 00:11:48.486873 kubelet[2805]: E0913 00:11:48.486693 2805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-45" Sep 13 00:11:48.486873 kubelet[2805]: I0913 00:11:48.486718 2805 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:48.488687 kubelet[2805]: E0913 00:11:48.488647 2805 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-45\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:48.830986 kubelet[2805]: I0913 00:11:48.830948 2805 apiserver.go:52] "Watching apiserver" Sep 13 00:11:48.860893 kubelet[2805]: I0913 00:11:48.860849 2805 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:11:50.684549 systemd[1]: Reloading requested from client PID 3090 ('systemctl') (unit session-7.scope)... Sep 13 00:11:50.684571 systemd[1]: Reloading... Sep 13 00:11:50.775499 zram_generator::config[3126]: No configuration found. Sep 13 00:11:50.920189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:51.024664 systemd[1]: Reloading finished in 339 ms. Sep 13 00:11:51.074763 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:51.090934 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:11:51.091209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:51.091278 systemd[1]: kubelet.service: Consumed 1.567s CPU time, 132.3M memory peak, 0B memory swap peak. Sep 13 00:11:51.098041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:51.735703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:51.747916 (kubelet)[3190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:11:51.820962 kubelet[3190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:51.821471 kubelet[3190]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:11:51.821471 kubelet[3190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:51.821471 kubelet[3190]: I0913 00:11:51.821454 3190 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:11:51.836166 kubelet[3190]: I0913 00:11:51.836095 3190 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:11:51.836166 kubelet[3190]: I0913 00:11:51.836122 3190 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:11:51.836991 kubelet[3190]: I0913 00:11:51.836940 3190 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:11:51.839165 kubelet[3190]: I0913 00:11:51.839147 3190 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:11:51.850464 kubelet[3190]: I0913 00:11:51.849307 3190 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:11:51.862081 kubelet[3190]: E0913 00:11:51.862036 3190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:11:51.862081 kubelet[3190]: I0913 00:11:51.862078 3190 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 13 00:11:51.869176 kubelet[3190]: I0913 00:11:51.869142 3190 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:11:51.870737 kubelet[3190]: I0913 00:11:51.870695 3190 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:11:51.870965 kubelet[3190]: I0913 00:11:51.870738 3190 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-45","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:11:51.871105 kubelet[3190]: I0913 00:11:51.870976 3190 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:11:51.871105 kubelet[3190]: I0913 00:11:51.870991 3190 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:11:51.871105 kubelet[3190]: I0913 00:11:51.871053 3190 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:51.871347 kubelet[3190]: I0913 00:11:51.871259 3190 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:11:51.871347 kubelet[3190]: I0913 00:11:51.871278 3190 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:11:51.873366 kubelet[3190]: I0913 00:11:51.872398 3190 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:11:51.873366 kubelet[3190]: I0913 00:11:51.872460 3190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:11:51.875648 kubelet[3190]: I0913 00:11:51.875624 3190 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:11:51.877995 kubelet[3190]: I0913 00:11:51.877813 3190 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:11:51.904583 kubelet[3190]: I0913 00:11:51.904421 3190 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:11:51.904823 kubelet[3190]: I0913 00:11:51.904623 3190 
server.go:1289] "Started kubelet" Sep 13 00:11:51.907407 kubelet[3190]: I0913 00:11:51.907242 3190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:11:51.908250 kubelet[3190]: I0913 00:11:51.908132 3190 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:11:51.908437 kubelet[3190]: I0913 00:11:51.908369 3190 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:11:51.909537 kubelet[3190]: I0913 00:11:51.909251 3190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:11:51.910260 kubelet[3190]: I0913 00:11:51.910246 3190 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:11:51.920279 kubelet[3190]: I0913 00:11:51.920243 3190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:11:51.925346 kubelet[3190]: I0913 00:11:51.925179 3190 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:11:51.925611 kubelet[3190]: I0913 00:11:51.925357 3190 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:11:51.925611 kubelet[3190]: I0913 00:11:51.925511 3190 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:11:51.928954 kubelet[3190]: I0913 00:11:51.928933 3190 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:11:51.931637 kubelet[3190]: I0913 00:11:51.931584 3190 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:11:51.933006 kubelet[3190]: E0913 00:11:51.932979 3190 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:11:51.937853 kubelet[3190]: I0913 00:11:51.937813 3190 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:11:51.949862 kubelet[3190]: I0913 00:11:51.949400 3190 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:11:51.963210 kubelet[3190]: I0913 00:11:51.963169 3190 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:11:51.963210 kubelet[3190]: I0913 00:11:51.963196 3190 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:11:51.963633 kubelet[3190]: I0913 00:11:51.963221 3190 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:11:51.963633 kubelet[3190]: I0913 00:11:51.963229 3190 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:11:51.963633 kubelet[3190]: E0913 00:11:51.963278 3190 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:11:52.006703 kubelet[3190]: I0913 00:11:52.006605 3190 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:11:52.008587 kubelet[3190]: I0913 00:11:52.008548 3190 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:11:52.008751 kubelet[3190]: I0913 00:11:52.008620 3190 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:52.008920 kubelet[3190]: I0913 00:11:52.008901 3190 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:11:52.008978 kubelet[3190]: I0913 00:11:52.008922 3190 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:11:52.008978 kubelet[3190]: I0913 00:11:52.008951 3190 policy_none.go:49] "None policy: Start" Sep 13 00:11:52.009062 kubelet[3190]: I0913 00:11:52.008981 3190 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:11:52.009062 kubelet[3190]: I0913 00:11:52.008997 3190 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:11:52.009176 kubelet[3190]: I0913 00:11:52.009164 3190 state_mem.go:75] "Updated machine memory state" Sep 13 00:11:52.022302 kubelet[3190]: E0913 00:11:52.022265 3190 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:11:52.022538 kubelet[3190]: I0913 00:11:52.022502 3190 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:11:52.022708 kubelet[3190]: I0913 00:11:52.022526 3190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:11:52.027750 kubelet[3190]: I0913 00:11:52.027720 3190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:11:52.029191 kubelet[3190]: E0913 00:11:52.029164 3190 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:11:52.066630 kubelet[3190]: I0913 00:11:52.065901 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-45" Sep 13 00:11:52.066630 kubelet[3190]: I0913 00:11:52.066415 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:52.068902 kubelet[3190]: I0913 00:11:52.068176 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130241 kubelet[3190]: I0913 00:11:52.130152 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:52.130377 kubelet[3190]: I0913 00:11:52.130250 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130377 kubelet[3190]: I0913 00:11:52.130272 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130377 kubelet[3190]: I0913 00:11:52.130289 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130377 kubelet[3190]: I0913 00:11:52.130305 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130377 kubelet[3190]: I0913 00:11:52.130323 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:52.130602 kubelet[3190]: I0913 00:11:52.130346 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/846eac714dfffa76d28e49d0e4493bd9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-45\" (UID: \"846eac714dfffa76d28e49d0e4493bd9\") " pod="kube-system/kube-controller-manager-ip-172-31-31-45" Sep 13 00:11:52.130602 kubelet[3190]: I0913 00:11:52.130363 3190 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5320a754065da919d23ab3a0bbbbc52e-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-45\" (UID: \"5320a754065da919d23ab3a0bbbbc52e\") " pod="kube-system/kube-scheduler-ip-172-31-31-45" Sep 13 00:11:52.130602 kubelet[3190]: I0913 00:11:52.130381 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d1047bc4c61b5b4f3f84b8344eec872-ca-certs\") pod \"kube-apiserver-ip-172-31-31-45\" (UID: \"5d1047bc4c61b5b4f3f84b8344eec872\") " pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:52.134393 kubelet[3190]: I0913 00:11:52.134363 3190 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-45" Sep 13 00:11:52.166284 kubelet[3190]: I0913 00:11:52.166245 3190 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-45" Sep 13 00:11:52.166441 kubelet[3190]: I0913 00:11:52.166337 3190 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-45" Sep 13 00:11:52.874919 kubelet[3190]: I0913 00:11:52.874732 3190 apiserver.go:52] "Watching apiserver" Sep 13 00:11:52.926473 kubelet[3190]: I0913 00:11:52.926373 3190 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:11:52.987609 kubelet[3190]: I0913 00:11:52.987565 3190 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:53.003228 kubelet[3190]: E0913 00:11:53.003165 3190 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-45\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-45" Sep 13 00:11:53.040366 kubelet[3190]: I0913 00:11:53.039412 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-45" podStartSLOduration=1.039389866 podStartE2EDuration="1.039389866s" podCreationTimestamp="2025-09-13 00:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:53.023026775 +0000 UTC m=+1.263759317" watchObservedRunningTime="2025-09-13 00:11:53.039389866 +0000 UTC m=+1.280122406" Sep 13 00:11:53.059041 kubelet[3190]: I0913 00:11:53.058875 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-45" podStartSLOduration=1.058855173 podStartE2EDuration="1.058855173s" podCreationTimestamp="2025-09-13 00:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:53.040237188 +0000 UTC m=+1.280969730" watchObservedRunningTime="2025-09-13 00:11:53.058855173 +0000 UTC m=+1.299587716" Sep 13 00:11:53.079266 kubelet[3190]: I0913 00:11:53.079111 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-45" podStartSLOduration=1.079052128 podStartE2EDuration="1.079052128s" podCreationTimestamp="2025-09-13 00:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:53.059875433 +0000 UTC m=+1.300607975" watchObservedRunningTime="2025-09-13 00:11:53.079052128 +0000 UTC m=+1.319784661" Sep 13 00:11:56.262018 
kubelet[3190]: I0913 00:11:56.261984 3190 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:11:56.263633 containerd[1973]: time="2025-09-13T00:11:56.262652384Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:11:56.263916 kubelet[3190]: I0913 00:11:56.262850 3190 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:11:57.137315 systemd[1]: Created slice kubepods-besteffort-pod96d17b1f_f50d_4204_a3f7_96852e867c5f.slice - libcontainer container kubepods-besteffort-pod96d17b1f_f50d_4204_a3f7_96852e867c5f.slice. Sep 13 00:11:57.165550 kubelet[3190]: I0913 00:11:57.165400 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96d17b1f-f50d-4204-a3f7-96852e867c5f-kube-proxy\") pod \"kube-proxy-qbqpj\" (UID: \"96d17b1f-f50d-4204-a3f7-96852e867c5f\") " pod="kube-system/kube-proxy-qbqpj" Sep 13 00:11:57.165550 kubelet[3190]: I0913 00:11:57.165456 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssv9\" (UniqueName: \"kubernetes.io/projected/96d17b1f-f50d-4204-a3f7-96852e867c5f-kube-api-access-8ssv9\") pod \"kube-proxy-qbqpj\" (UID: \"96d17b1f-f50d-4204-a3f7-96852e867c5f\") " pod="kube-system/kube-proxy-qbqpj" Sep 13 00:11:57.165550 kubelet[3190]: I0913 00:11:57.165484 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96d17b1f-f50d-4204-a3f7-96852e867c5f-xtables-lock\") pod \"kube-proxy-qbqpj\" (UID: \"96d17b1f-f50d-4204-a3f7-96852e867c5f\") " pod="kube-system/kube-proxy-qbqpj" Sep 13 00:11:57.165550 kubelet[3190]: I0913 00:11:57.165500 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96d17b1f-f50d-4204-a3f7-96852e867c5f-lib-modules\") pod \"kube-proxy-qbqpj\" (UID: \"96d17b1f-f50d-4204-a3f7-96852e867c5f\") " pod="kube-system/kube-proxy-qbqpj" Sep 13 00:11:57.198882 update_engine[1959]: I20250913 00:11:57.198812 1959 update_attempter.cc:509] Updating boot flags... Sep 13 00:11:57.308047 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3255) Sep 13 00:11:57.450064 containerd[1973]: time="2025-09-13T00:11:57.449972959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbqpj,Uid:96d17b1f-f50d-4204-a3f7-96852e867c5f,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:57.476950 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3256) Sep 13 00:11:57.522924 containerd[1973]: time="2025-09-13T00:11:57.522835895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:57.522924 containerd[1973]: time="2025-09-13T00:11:57.522888406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:57.523730 containerd[1973]: time="2025-09-13T00:11:57.523391838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:57.523730 containerd[1973]: time="2025-09-13T00:11:57.523658384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:57.525454 systemd[1]: Created slice kubepods-besteffort-pod30670457_eea1_4cdc_b2df_1da38b856288.slice - libcontainer container kubepods-besteffort-pod30670457_eea1_4cdc_b2df_1da38b856288.slice. Sep 13 00:11:57.566621 systemd[1]: Started cri-containerd-168c849c55849bf86907c1b078d05c5b7a9a21ea3f6d6fecb17e2bc8e401aa84.scope - libcontainer container 168c849c55849bf86907c1b078d05c5b7a9a21ea3f6d6fecb17e2bc8e401aa84. Sep 13 00:11:57.569893 kubelet[3190]: I0913 00:11:57.569805 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzhx\" (UniqueName: \"kubernetes.io/projected/30670457-eea1-4cdc-b2df-1da38b856288-kube-api-access-2xzhx\") pod \"tigera-operator-755d956888-qwl9v\" (UID: \"30670457-eea1-4cdc-b2df-1da38b856288\") " pod="tigera-operator/tigera-operator-755d956888-qwl9v" Sep 13 00:11:57.569893 kubelet[3190]: I0913 00:11:57.569852 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30670457-eea1-4cdc-b2df-1da38b856288-var-lib-calico\") pod \"tigera-operator-755d956888-qwl9v\" (UID: \"30670457-eea1-4cdc-b2df-1da38b856288\") " pod="tigera-operator/tigera-operator-755d956888-qwl9v" Sep 13 00:11:57.652194 containerd[1973]: time="2025-09-13T00:11:57.652158613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbqpj,Uid:96d17b1f-f50d-4204-a3f7-96852e867c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"168c849c55849bf86907c1b078d05c5b7a9a21ea3f6d6fecb17e2bc8e401aa84\"" Sep 13 00:11:57.672801 containerd[1973]: time="2025-09-13T00:11:57.672666387Z" level=info msg="CreateContainer within sandbox \"168c849c55849bf86907c1b078d05c5b7a9a21ea3f6d6fecb17e2bc8e401aa84\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:11:57.706400 containerd[1973]: time="2025-09-13T00:11:57.706288596Z" level=info msg="CreateContainer within sandbox \"168c849c55849bf86907c1b078d05c5b7a9a21ea3f6d6fecb17e2bc8e401aa84\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"abd30506693699df6a189f6f517c2be2de40b8e5301fc7c0c2d1eb3fc6171a23\"" Sep 13 00:11:57.708801 containerd[1973]: time="2025-09-13T00:11:57.708738936Z" level=info msg="StartContainer for \"abd30506693699df6a189f6f517c2be2de40b8e5301fc7c0c2d1eb3fc6171a23\"" Sep 13 00:11:57.736652 systemd[1]: Started cri-containerd-abd30506693699df6a189f6f517c2be2de40b8e5301fc7c0c2d1eb3fc6171a23.scope - libcontainer container abd30506693699df6a189f6f517c2be2de40b8e5301fc7c0c2d1eb3fc6171a23. Sep 13 00:11:57.771786 containerd[1973]: time="2025-09-13T00:11:57.771671930Z" level=info msg="StartContainer for \"abd30506693699df6a189f6f517c2be2de40b8e5301fc7c0c2d1eb3fc6171a23\" returns successfully" Sep 13 00:11:57.837288 containerd[1973]: time="2025-09-13T00:11:57.837150904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qwl9v,Uid:30670457-eea1-4cdc-b2df-1da38b856288,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:11:57.864929 containerd[1973]: time="2025-09-13T00:11:57.864818028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:57.865083 containerd[1973]: time="2025-09-13T00:11:57.865022131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:57.865247 containerd[1973]: time="2025-09-13T00:11:57.865091703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:57.866159 containerd[1973]: time="2025-09-13T00:11:57.866108116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:57.891690 systemd[1]: Started cri-containerd-f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005.scope - libcontainer container f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005. Sep 13 00:11:57.939489 containerd[1973]: time="2025-09-13T00:11:57.939216486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qwl9v,Uid:30670457-eea1-4cdc-b2df-1da38b856288,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005\"" Sep 13 00:11:57.941605 containerd[1973]: time="2025-09-13T00:11:57.941558963Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:11:59.366342 kubelet[3190]: I0913 00:11:59.365962 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbqpj" podStartSLOduration=2.365939476 podStartE2EDuration="2.365939476s" podCreationTimestamp="2025-09-13 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:58.016394093 +0000 UTC m=+6.257126635" watchObservedRunningTime="2025-09-13 00:11:59.365939476 +0000 UTC m=+7.606672020" Sep 13 00:11:59.404101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108854686.mount: Deactivated successfully. 
Sep 13 00:12:00.446791 containerd[1973]: time="2025-09-13T00:12:00.446737690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:00.448104 containerd[1973]: time="2025-09-13T00:12:00.448038562Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:12:00.449576 containerd[1973]: time="2025-09-13T00:12:00.449487899Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:00.452494 containerd[1973]: time="2025-09-13T00:12:00.452410622Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:00.468298 containerd[1973]: time="2025-09-13T00:12:00.468158763Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.526556741s" Sep 13 00:12:00.468298 containerd[1973]: time="2025-09-13T00:12:00.468201260Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:12:00.472530 containerd[1973]: time="2025-09-13T00:12:00.472490810Z" level=info msg="CreateContainer within sandbox \"f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:12:00.488078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624746788.mount: Deactivated successfully. Sep 13 00:12:00.490773 containerd[1973]: time="2025-09-13T00:12:00.490729340Z" level=info msg="CreateContainer within sandbox \"f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7\"" Sep 13 00:12:00.492692 containerd[1973]: time="2025-09-13T00:12:00.491689537Z" level=info msg="StartContainer for \"84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7\"" Sep 13 00:12:00.531702 systemd[1]: Started cri-containerd-84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7.scope - libcontainer container 84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7. 
Sep 13 00:12:00.562329 containerd[1973]: time="2025-09-13T00:12:00.562200409Z" level=info msg="StartContainer for \"84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7\" returns successfully" Sep 13 00:12:02.568699 kubelet[3190]: I0913 00:12:02.568487 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-qwl9v" podStartSLOduration=3.040200425 podStartE2EDuration="5.568465275s" podCreationTimestamp="2025-09-13 00:11:57 +0000 UTC" firstStartedPulling="2025-09-13 00:11:57.941133138 +0000 UTC m=+6.181865669" lastFinishedPulling="2025-09-13 00:12:00.469398001 +0000 UTC m=+8.710130519" observedRunningTime="2025-09-13 00:12:01.034509206 +0000 UTC m=+9.275241746" watchObservedRunningTime="2025-09-13 00:12:02.568465275 +0000 UTC m=+10.809197815" Sep 13 00:12:06.767443 sudo[2299]: pam_unix(sudo:session): session closed for user root Sep 13 00:12:06.791167 sshd[2296]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:06.797847 systemd-logind[1958]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:12:06.799802 systemd[1]: sshd@6-172.31.31.45:22-139.178.89.65:48462.service: Deactivated successfully. Sep 13 00:12:06.808437 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:12:06.811251 systemd[1]: session-7.scope: Consumed 6.718s CPU time, 145.8M memory peak, 0B memory swap peak. Sep 13 00:12:06.817382 systemd-logind[1958]: Removed session 7. Sep 13 00:12:11.770280 systemd[1]: Created slice kubepods-besteffort-pod82536023_4601_435b_b715_2229ec871c92.slice - libcontainer container kubepods-besteffort-pod82536023_4601_435b_b715_2229ec871c92.slice. Sep 13 00:12:11.771532 kubelet[3190]: I0913 00:12:11.770917 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/82536023-4601-435b-b715-2229ec871c92-typha-certs\") pod \"calico-typha-79df87f4bc-bfkvb\" (UID: \"82536023-4601-435b-b715-2229ec871c92\") " pod="calico-system/calico-typha-79df87f4bc-bfkvb" Sep 13 00:12:11.771532 kubelet[3190]: I0913 00:12:11.770961 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dnm\" (UniqueName: \"kubernetes.io/projected/82536023-4601-435b-b715-2229ec871c92-kube-api-access-n8dnm\") pod \"calico-typha-79df87f4bc-bfkvb\" (UID: \"82536023-4601-435b-b715-2229ec871c92\") " pod="calico-system/calico-typha-79df87f4bc-bfkvb" Sep 13 00:12:11.771532 kubelet[3190]: I0913 00:12:11.770997 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82536023-4601-435b-b715-2229ec871c92-tigera-ca-bundle\") pod \"calico-typha-79df87f4bc-bfkvb\" (UID: \"82536023-4601-435b-b715-2229ec871c92\") " pod="calico-system/calico-typha-79df87f4bc-bfkvb" Sep 13 00:12:12.051472 systemd[1]: Created slice kubepods-besteffort-pod17bee075_6203_4712_be66_6c756c339693.slice - libcontainer container kubepods-besteffort-pod17bee075_6203_4712_be66_6c756c339693.slice. 
Sep 13 00:12:12.073329 kubelet[3190]: I0913 00:12:12.073226 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-var-run-calico\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073329 kubelet[3190]: I0913 00:12:12.073335 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-cni-log-dir\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073512 kubelet[3190]: I0913 00:12:12.073357 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-cni-net-dir\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073512 kubelet[3190]: I0913 00:12:12.073373 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-flexvol-driver-host\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073512 kubelet[3190]: I0913 00:12:12.073414 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-policysync\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073512 kubelet[3190]: I0913 00:12:12.073458 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-lib-modules\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073512 kubelet[3190]: I0913 00:12:12.073480 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-var-lib-calico\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073650 kubelet[3190]: I0913 00:12:12.073501 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-xtables-lock\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073650 kubelet[3190]: I0913 00:12:12.073517 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17bee075-6203-4712-be66-6c756c339693-node-certs\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073650 kubelet[3190]: I0913 00:12:12.073534 3190 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17bee075-6203-4712-be66-6c756c339693-cni-bin-dir\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073650 kubelet[3190]: I0913 00:12:12.073550 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17bee075-6203-4712-be66-6c756c339693-tigera-ca-bundle\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.073650 kubelet[3190]: I0913 00:12:12.073564 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfn29\" (UniqueName: \"kubernetes.io/projected/17bee075-6203-4712-be66-6c756c339693-kube-api-access-jfn29\") pod \"calico-node-gfd9d\" (UID: \"17bee075-6203-4712-be66-6c756c339693\") " pod="calico-system/calico-node-gfd9d" Sep 13 00:12:12.085310 containerd[1973]: time="2025-09-13T00:12:12.084688446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79df87f4bc-bfkvb,Uid:82536023-4601-435b-b715-2229ec871c92,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:12.129557 containerd[1973]: time="2025-09-13T00:12:12.127543600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:12.129557 containerd[1973]: time="2025-09-13T00:12:12.127605755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:12.129557 containerd[1973]: time="2025-09-13T00:12:12.127620095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:12.129557 containerd[1973]: time="2025-09-13T00:12:12.127708244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:12.188302 kubelet[3190]: E0913 00:12:12.187965 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.188302 kubelet[3190]: W0913 00:12:12.187991 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.188302 kubelet[3190]: E0913 00:12:12.188261 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:12:12.192916 kubelet[3190]: E0913 00:12:12.192707 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.192916 kubelet[3190]: W0913 00:12:12.192741 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.192916 kubelet[3190]: E0913 00:12:12.192761 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:12:12.227303 systemd[1]: Started cri-containerd-d8895b30a8e818cf5c470e03c35665799e53ed3b6616254868914a514b81d1c6.scope - libcontainer container d8895b30a8e818cf5c470e03c35665799e53ed3b6616254868914a514b81d1c6.
Sep 13 00:12:12.351407 kubelet[3190]: E0913 00:12:12.350881 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39" Sep 13 00:12:12.356613 containerd[1973]: time="2025-09-13T00:12:12.356547258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gfd9d,Uid:17bee075-6203-4712-be66-6c756c339693,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:12.393321 kubelet[3190]: I0913 00:12:12.393301 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7bbf7a35-8d3f-406b-8fc5-674202715e39-varrun\") pod \"csi-node-driver-r6h8z\" (UID: \"7bbf7a35-8d3f-406b-8fc5-674202715e39\") " pod="calico-system/csi-node-driver-r6h8z"
Sep 13 00:12:12.398089 kubelet[3190]: I0913 00:12:12.397255 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7bbf7a35-8d3f-406b-8fc5-674202715e39-registration-dir\") pod \"csi-node-driver-r6h8z\" (UID: \"7bbf7a35-8d3f-406b-8fc5-674202715e39\") " pod="calico-system/csi-node-driver-r6h8z" Sep 13 00:12:12.398417 kubelet[3190]: I0913 00:12:12.398240 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7h9\" (UniqueName: \"kubernetes.io/projected/7bbf7a35-8d3f-406b-8fc5-674202715e39-kube-api-access-4x7h9\") pod \"csi-node-driver-r6h8z\" (UID: \"7bbf7a35-8d3f-406b-8fc5-674202715e39\") " pod="calico-system/csi-node-driver-r6h8z" Sep 13 00:12:12.405420 kubelet[3190]: I0913 00:12:12.405270 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7bbf7a35-8d3f-406b-8fc5-674202715e39-socket-dir\") pod \"csi-node-driver-r6h8z\" (UID: \"7bbf7a35-8d3f-406b-8fc5-674202715e39\") " pod="calico-system/csi-node-driver-r6h8z" Sep 13 00:12:12.406818 kubelet[3190]: I0913 00:12:12.406775 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7bbf7a35-8d3f-406b-8fc5-674202715e39-kubelet-dir\") pod \"csi-node-driver-r6h8z\" (UID: \"7bbf7a35-8d3f-406b-8fc5-674202715e39\") " pod="calico-system/csi-node-driver-r6h8z"
Error: unexpected end of JSON input" Sep 13 00:12:12.409803 kubelet[3190]: E0913 00:12:12.409542 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.409803 kubelet[3190]: W0913 00:12:12.409560 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.409803 kubelet[3190]: E0913 00:12:12.409575 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:12:12.410374 kubelet[3190]: E0913 00:12:12.410044 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.410374 kubelet[3190]: W0913 00:12:12.410077 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.410374 kubelet[3190]: E0913 00:12:12.410094 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:12:12.410718 kubelet[3190]: E0913 00:12:12.410692 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.410718 kubelet[3190]: W0913 00:12:12.410704 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.410801 kubelet[3190]: E0913 00:12:12.410717 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:12:12.411518 kubelet[3190]: E0913 00:12:12.411253 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:12:12.411518 kubelet[3190]: W0913 00:12:12.411266 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:12:12.411518 kubelet[3190]: E0913 00:12:12.411345 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:12:12.415857 containerd[1973]: time="2025-09-13T00:12:12.413619036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:12.415857 containerd[1973]: time="2025-09-13T00:12:12.413684560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:12.415857 containerd[1973]: time="2025-09-13T00:12:12.413702013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:12.415857 containerd[1973]: time="2025-09-13T00:12:12.413806937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Sep 13 00:12:12.462722 systemd[1]: Started cri-containerd-b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe.scope - libcontainer container b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe.
Sep 13 00:12:12.472303 containerd[1973]: time="2025-09-13T00:12:12.472164514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79df87f4bc-bfkvb,Uid:82536023-4601-435b-b715-2229ec871c92,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8895b30a8e818cf5c470e03c35665799e53ed3b6616254868914a514b81d1c6\""
Sep 13 00:12:12.476399 containerd[1973]: time="2025-09-13T00:12:12.476368424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:12:12.635071 containerd[1973]: time="2025-09-13T00:12:12.634793824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gfd9d,Uid:17bee075-6203-4712-be66-6c756c339693,Namespace:calico-system,Attempt:0,} returns sandbox id \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\""
Sep 13 00:12:13.780436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347662635.mount: Deactivated successfully.
Sep 13 00:12:13.964983 kubelet[3190]: E0913 00:12:13.963972 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39"
Sep 13 00:12:15.335718 containerd[1973]: time="2025-09-13T00:12:15.335659796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:15.337637 containerd[1973]: time="2025-09-13T00:12:15.337447938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:12:15.340733 containerd[1973]: time="2025-09-13T00:12:15.339613765Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:15.343677 containerd[1973]: time="2025-09-13T00:12:15.342754988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:15.343677 containerd[1973]: time="2025-09-13T00:12:15.343562714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.866861086s"
Sep 13 00:12:15.343677 containerd[1973]: time="2025-09-13T00:12:15.343591402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:12:15.344996 containerd[1973]: time="2025-09-13T00:12:15.344962802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:12:15.391069 containerd[1973]: time="2025-09-13T00:12:15.390977700Z" level=info msg="CreateContainer within sandbox \"d8895b30a8e818cf5c470e03c35665799e53ed3b6616254868914a514b81d1c6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:12:15.606009 containerd[1973]: time="2025-09-13T00:12:15.605889502Z" level=info msg="CreateContainer within sandbox \"d8895b30a8e818cf5c470e03c35665799e53ed3b6616254868914a514b81d1c6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9aed474d31a1dd65df5d92fe23b70a45fd0d4f4eb6f7aca4f72136af98bfa054\""
Sep 13 00:12:15.609480 containerd[1973]: time="2025-09-13T00:12:15.607363516Z" level=info msg="StartContainer for \"9aed474d31a1dd65df5d92fe23b70a45fd0d4f4eb6f7aca4f72136af98bfa054\""
Sep 13 00:12:15.775767 systemd[1]: Started cri-containerd-9aed474d31a1dd65df5d92fe23b70a45fd0d4f4eb6f7aca4f72136af98bfa054.scope - libcontainer container 9aed474d31a1dd65df5d92fe23b70a45fd0d4f4eb6f7aca4f72136af98bfa054.
Sep 13 00:12:15.878280 containerd[1973]: time="2025-09-13T00:12:15.877914988Z" level=info msg="StartContainer for \"9aed474d31a1dd65df5d92fe23b70a45fd0d4f4eb6f7aca4f72136af98bfa054\" returns successfully"
Sep 13 00:12:15.967746 kubelet[3190]: E0913 00:12:15.966962 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39"
Sep 13 00:12:16.126810 kubelet[3190]: E0913 00:12:16.126768 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:12:16.126810 kubelet[3190]: W0913 00:12:16.126802 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:12:16.127117 kubelet[3190]: E0913 00:12:16.126827 3190 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:12:16.780062 containerd[1973]: time="2025-09-13T00:12:16.780016397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:16.781441 containerd[1973]: time="2025-09-13T00:12:16.781387070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 13 00:12:16.782781 containerd[1973]: time="2025-09-13T00:12:16.782747537Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:16.785451 containerd[1973]: time="2025-09-13T00:12:16.785378349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:16.786657 containerd[1973]: time="2025-09-13T00:12:16.786571473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.441569901s"
Sep 13 00:12:16.787419 containerd[1973]: time="2025-09-13T00:12:16.787316520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:12:16.793623 containerd[1973]: time="2025-09-13T00:12:16.793504210Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:12:16.808016 containerd[1973]: time="2025-09-13T00:12:16.807968963Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e\""
Sep 13 00:12:16.811062 containerd[1973]: time="2025-09-13T00:12:16.808713839Z" level=info msg="StartContainer for \"9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e\""
Sep 13 00:12:16.866688 systemd[1]: Started cri-containerd-9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e.scope - libcontainer container 9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e.
Sep 13 00:12:16.904203 containerd[1973]: time="2025-09-13T00:12:16.904167304Z" level=info msg="StartContainer for \"9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e\" returns successfully"
Sep 13 00:12:16.917834 systemd[1]: cri-containerd-9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e.scope: Deactivated successfully.
Sep 13 00:12:16.955295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e-rootfs.mount: Deactivated successfully.
Sep 13 00:12:16.975735 containerd[1973]: time="2025-09-13T00:12:16.959579387Z" level=info msg="shim disconnected" id=9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e namespace=k8s.io
Sep 13 00:12:16.975924 containerd[1973]: time="2025-09-13T00:12:16.975738115Z" level=warning msg="cleaning up after shim disconnected" id=9bbc5a9897553423d7392b154e2a6a61391ff33ba9c5ee74212ffe9374bec93e namespace=k8s.io
Sep 13 00:12:16.975924 containerd[1973]: time="2025-09-13T00:12:16.975759745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:12:17.112745 containerd[1973]: time="2025-09-13T00:12:17.112481011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:12:17.135448 kubelet[3190]: I0913 00:12:17.132297 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79df87f4bc-bfkvb" podStartSLOduration=3.263396198 podStartE2EDuration="6.13227366s" podCreationTimestamp="2025-09-13 00:12:11 +0000 UTC" firstStartedPulling="2025-09-13 00:12:12.475647049 +0000 UTC m=+20.716379582" lastFinishedPulling="2025-09-13 00:12:15.344524526 +0000 UTC m=+23.585257044" observedRunningTime="2025-09-13 00:12:16.148301911 +0000 UTC m=+24.389034453" watchObservedRunningTime="2025-09-13 00:12:17.13227366 +0000 UTC m=+25.373006201"
Sep 13 00:12:17.974127 kubelet[3190]: E0913 00:12:17.974066 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39"
Sep 13 00:12:19.965370 kubelet[3190]: E0913 00:12:19.965324 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39"
Sep 13 00:12:21.029586 containerd[1973]: time="2025-09-13T00:12:21.029521975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:21.030705 containerd[1973]: time="2025-09-13T00:12:21.030564801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:12:21.032082 containerd[1973]: time="2025-09-13T00:12:21.031894642Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:21.034321 containerd[1973]: time="2025-09-13T00:12:21.034289320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:21.035337 containerd[1973]: time="2025-09-13T00:12:21.034979820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.922464888s"
Sep 13 00:12:21.035337 containerd[1973]: time="2025-09-13T00:12:21.035008133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:12:21.039881 containerd[1973]: time="2025-09-13T00:12:21.039841308Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:12:21.058219 containerd[1973]: time="2025-09-13T00:12:21.058176003Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c\""
Sep 13 00:12:21.059468 containerd[1973]: time="2025-09-13T00:12:21.058854977Z" level=info msg="StartContainer for \"a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c\""
Sep 13 00:12:21.102637 systemd[1]: Started cri-containerd-a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c.scope - libcontainer container a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c.
Sep 13 00:12:21.177528 containerd[1973]: time="2025-09-13T00:12:21.177464576Z" level=info msg="StartContainer for \"a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c\" returns successfully"
Sep 13 00:12:21.967518 kubelet[3190]: E0913 00:12:21.967460 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39"
Sep 13 00:12:22.127884 systemd[1]: cri-containerd-a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c.scope: Deactivated successfully.
Sep 13 00:12:22.228700 kubelet[3190]: I0913 00:12:22.224810 3190 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 00:12:22.252088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c-rootfs.mount: Deactivated successfully.
Sep 13 00:12:22.254782 containerd[1973]: time="2025-09-13T00:12:22.254533036Z" level=info msg="shim disconnected" id=a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c namespace=k8s.io
Sep 13 00:12:22.254782 containerd[1973]: time="2025-09-13T00:12:22.254718793Z" level=warning msg="cleaning up after shim disconnected" id=a11848b2ec1940a94c494271c9fde6595db4c7be8212b7db745e59ff04d06f5c namespace=k8s.io
Sep 13 00:12:22.254782 containerd[1973]: time="2025-09-13T00:12:22.254733707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:12:22.340536 systemd[1]: Created slice kubepods-besteffort-pod39751ac3_0924_426f_b77b_8d83a90311a5.slice - libcontainer container kubepods-besteffort-pod39751ac3_0924_426f_b77b_8d83a90311a5.slice.
Sep 13 00:12:22.354849 systemd[1]: Created slice kubepods-besteffort-podb2cc29fb_8087_44c7_b1ba_93ec8b4c09fd.slice - libcontainer container kubepods-besteffort-podb2cc29fb_8087_44c7_b1ba_93ec8b4c09fd.slice.
Sep 13 00:12:22.364748 systemd[1]: Created slice kubepods-burstable-poda3bb9403_ed1c_46f5_b8ac_f060a9d357fb.slice - libcontainer container kubepods-burstable-poda3bb9403_ed1c_46f5_b8ac_f060a9d357fb.slice.
Sep 13 00:12:22.377206 systemd[1]: Created slice kubepods-burstable-podc13143f6_e5dc_44d2_a955_cb02eb84e81b.slice - libcontainer container kubepods-burstable-podc13143f6_e5dc_44d2_a955_cb02eb84e81b.slice. Sep 13 00:12:22.418052 systemd[1]: Created slice kubepods-besteffort-podcc1327ab_9550_47ae_bd83_f41e2cea6ca2.slice - libcontainer container kubepods-besteffort-podcc1327ab_9550_47ae_bd83_f41e2cea6ca2.slice. Sep 13 00:12:22.431325 kubelet[3190]: I0913 00:12:22.430992 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c13143f6-e5dc-44d2-a955-cb02eb84e81b-config-volume\") pod \"coredns-674b8bbfcf-pc25k\" (UID: \"c13143f6-e5dc-44d2-a955-cb02eb84e81b\") " pod="kube-system/coredns-674b8bbfcf-pc25k" Sep 13 00:12:22.431325 kubelet[3190]: I0913 00:12:22.431040 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hll\" (UniqueName: \"kubernetes.io/projected/a3bb9403-ed1c-46f5-b8ac-f060a9d357fb-kube-api-access-k9hll\") pod \"coredns-674b8bbfcf-96m5z\" (UID: \"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb\") " pod="kube-system/coredns-674b8bbfcf-96m5z" Sep 13 00:12:22.431325 kubelet[3190]: I0913 00:12:22.431085 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39751ac3-0924-426f-b77b-8d83a90311a5-calico-apiserver-certs\") pod \"calico-apiserver-856f469597-zpqrf\" (UID: \"39751ac3-0924-426f-b77b-8d83a90311a5\") " pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" Sep 13 00:12:22.431325 kubelet[3190]: I0913 00:12:22.431119 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdwc\" (UniqueName: \"kubernetes.io/projected/c13143f6-e5dc-44d2-a955-cb02eb84e81b-kube-api-access-kcdwc\") pod \"coredns-674b8bbfcf-pc25k\" (UID: \"c13143f6-e5dc-44d2-a955-cb02eb84e81b\") " pod="kube-system/coredns-674b8bbfcf-pc25k" Sep 13 00:12:22.431325 kubelet[3190]: I0913 00:12:22.431149 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6d64\" (UniqueName: \"kubernetes.io/projected/39751ac3-0924-426f-b77b-8d83a90311a5-kube-api-access-t6d64\") pod \"calico-apiserver-856f469597-zpqrf\" (UID: \"39751ac3-0924-426f-b77b-8d83a90311a5\") " pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" Sep 13 00:12:22.431763 kubelet[3190]: I0913 00:12:22.431179 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3bb9403-ed1c-46f5-b8ac-f060a9d357fb-config-volume\") pod \"coredns-674b8bbfcf-96m5z\" (UID: \"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb\") " pod="kube-system/coredns-674b8bbfcf-96m5z" Sep 13 00:12:22.431763 kubelet[3190]: I0913 00:12:22.431214 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxn2\" (UniqueName: \"kubernetes.io/projected/b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd-kube-api-access-6kxn2\") pod \"calico-kube-controllers-5448c99594-x6ss9\" (UID: \"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd\") " pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" Sep 13 00:12:22.431763 kubelet[3190]: I0913 00:12:22.431245 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd-tigera-ca-bundle\") pod \"calico-kube-controllers-5448c99594-x6ss9\" (UID: \"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd\") " pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" Sep 13 00:12:22.439629 systemd[1]: Created slice kubepods-besteffort-pode424bb58_dcec_4399_856b_a30a0d9831b0.slice - libcontainer container kubepods-besteffort-pode424bb58_dcec_4399_856b_a30a0d9831b0.slice. Sep 13 00:12:22.450207 systemd[1]: Created slice kubepods-besteffort-poda10460e3_a105_4708_a82a_060d5d67b42e.slice - libcontainer container kubepods-besteffort-poda10460e3_a105_4708_a82a_060d5d67b42e.slice. Sep 13 00:12:22.534729 kubelet[3190]: I0913 00:12:22.531861 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-ca-bundle\") pod \"whisker-74b4d94f48-g9zgz\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " pod="calico-system/whisker-74b4d94f48-g9zgz" Sep 13 00:12:22.534729 kubelet[3190]: I0913 00:12:22.531958 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrf7\" (UniqueName: \"kubernetes.io/projected/a10460e3-a105-4708-a82a-060d5d67b42e-kube-api-access-dmrf7\") pod \"whisker-74b4d94f48-g9zgz\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " pod="calico-system/whisker-74b4d94f48-g9zgz" Sep 13 00:12:22.534729 kubelet[3190]: I0913 00:12:22.532005 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e424bb58-dcec-4399-856b-a30a0d9831b0-calico-apiserver-certs\") pod \"calico-apiserver-856f469597-bn26z\" (UID: \"e424bb58-dcec-4399-856b-a30a0d9831b0\") " pod="calico-apiserver/calico-apiserver-856f469597-bn26z" Sep 13 00:12:22.534729 kubelet[3190]: I0913 00:12:22.532033 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1327ab-9550-47ae-bd83-f41e2cea6ca2-config\") pod \"goldmane-54d579b49d-l7k22\" (UID: \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\") " pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:22.534729 kubelet[3190]: I0913 00:12:22.532065 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc1327ab-9550-47ae-bd83-f41e2cea6ca2-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-l7k22\" (UID: \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\") " pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:22.535254 kubelet[3190]: I0913 00:12:22.532089 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc1327ab-9550-47ae-bd83-f41e2cea6ca2-goldmane-key-pair\") pod \"goldmane-54d579b49d-l7k22\" (UID: \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\") " pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:22.535254 kubelet[3190]: I0913 00:12:22.532143 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtm8\" (UniqueName: \"kubernetes.io/projected/e424bb58-dcec-4399-856b-a30a0d9831b0-kube-api-access-zmtm8\") pod \"calico-apiserver-856f469597-bn26z\" (UID: \"e424bb58-dcec-4399-856b-a30a0d9831b0\") " pod="calico-apiserver/calico-apiserver-856f469597-bn26z" 
Sep 13 00:12:22.535254 kubelet[3190]: I0913 00:12:22.532168 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhlp9\" (UniqueName: \"kubernetes.io/projected/cc1327ab-9550-47ae-bd83-f41e2cea6ca2-kube-api-access-jhlp9\") pod \"goldmane-54d579b49d-l7k22\" (UID: \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\") " pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:22.535254 kubelet[3190]: I0913 00:12:22.532212 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-backend-key-pair\") pod \"whisker-74b4d94f48-g9zgz\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " pod="calico-system/whisker-74b4d94f48-g9zgz" Sep 13 00:12:22.660730 containerd[1973]: time="2025-09-13T00:12:22.659460440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448c99594-x6ss9,Uid:b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:22.661537 containerd[1973]: time="2025-09-13T00:12:22.661493658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-zpqrf,Uid:39751ac3-0924-426f-b77b-8d83a90311a5,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:12:22.673001 containerd[1973]: time="2025-09-13T00:12:22.672957907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96m5z,Uid:a3bb9403-ed1c-46f5-b8ac-f060a9d357fb,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:22.685454 containerd[1973]: time="2025-09-13T00:12:22.685399817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pc25k,Uid:c13143f6-e5dc-44d2-a955-cb02eb84e81b,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:22.738402 containerd[1973]: time="2025-09-13T00:12:22.738365496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-l7k22,Uid:cc1327ab-9550-47ae-bd83-f41e2cea6ca2,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:22.746105 containerd[1973]: time="2025-09-13T00:12:22.745813618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-bn26z,Uid:e424bb58-dcec-4399-856b-a30a0d9831b0,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:12:22.761632 containerd[1973]: time="2025-09-13T00:12:22.761586983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b4d94f48-g9zgz,Uid:a10460e3-a105-4708-a82a-060d5d67b42e,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:23.136190 containerd[1973]: time="2025-09-13T00:12:23.136058993Z" level=error msg="Failed to destroy network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.137355 containerd[1973]: time="2025-09-13T00:12:23.137312802Z" level=error msg="Failed to destroy network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.143343 containerd[1973]: time="2025-09-13T00:12:23.143181317Z" level=error msg="encountered an error cleaning up failed sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.143343 containerd[1973]: time="2025-09-13T00:12:23.143268945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448c99594-x6ss9,Uid:b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.144987 containerd[1973]: time="2025-09-13T00:12:23.144642997Z" level=error msg="encountered an error cleaning up failed sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.144987 containerd[1973]: time="2025-09-13T00:12:23.144898812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b4d94f48-g9zgz,Uid:a10460e3-a105-4708-a82a-060d5d67b42e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.149955101Z" level=error msg="Failed to destroy network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.150511600Z" level=error msg="encountered an error cleaning up failed sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.150560695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-bn26z,Uid:e424bb58-dcec-4399-856b-a30a0d9831b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.150900428Z" level=error msg="Failed to destroy network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.151003298Z" level=error msg="Failed to destroy network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.151851705Z" level=error msg="encountered an error cleaning up failed sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.151899749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96m5z,Uid:a3bb9403-ed1c-46f5-b8ac-f060a9d357fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.153247965Z" level=error msg="encountered an error cleaning up failed sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.153307768Z" level=error msg="Failed to destroy network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.153632696Z" level=error msg="encountered an error cleaning up failed sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.153694120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-l7k22,Uid:cc1327ab-9550-47ae-bd83-f41e2cea6ca2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.153316690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-zpqrf,Uid:39751ac3-0924-426f-b77b-8d83a90311a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.158479419Z" level=error msg="Failed to destroy network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178347 containerd[1973]: time="2025-09-13T00:12:23.159013395Z" level=error msg="encountered an error cleaning up failed sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178917 kubelet[3190]: E0913 00:12:23.151403 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178917 kubelet[3190]: E0913 00:12:23.151574 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" Sep 13 00:12:23.178917 kubelet[3190]: E0913 00:12:23.151472 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.178917 kubelet[3190]: E0913 00:12:23.151669 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b4d94f48-g9zgz" Sep 13 00:12:23.179691 containerd[1973]: time="2025-09-13T00:12:23.159064159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pc25k,Uid:c13143f6-e5dc-44d2-a955-cb02eb84e81b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.179759 kubelet[3190]: E0913 
00:12:23.151694 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b4d94f48-g9zgz" Sep 13 00:12:23.179759 kubelet[3190]: E0913 00:12:23.151753 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74b4d94f48-g9zgz_calico-system(a10460e3-a105-4708-a82a-060d5d67b42e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74b4d94f48-g9zgz_calico-system(a10460e3-a105-4708-a82a-060d5d67b42e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b4d94f48-g9zgz" podUID="a10460e3-a105-4708-a82a-060d5d67b42e" Sep 13 00:12:23.179759 kubelet[3190]: E0913 00:12:23.151611 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" Sep 13 00:12:23.179955 kubelet[3190]: E0913 00:12:23.152050 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5448c99594-x6ss9_calico-system(b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5448c99594-x6ss9_calico-system(b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" podUID="b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd" Sep 13 00:12:23.179955 kubelet[3190]: E0913 00:12:23.151510 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.179955 kubelet[3190]: E0913 00:12:23.152088 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-856f469597-bn26z" Sep 13 00:12:23.180127 kubelet[3190]: E0913 00:12:23.152104 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-856f469597-bn26z" Sep 13 00:12:23.180127 kubelet[3190]: E0913 00:12:23.152128 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-856f469597-bn26z_calico-apiserver(e424bb58-dcec-4399-856b-a30a0d9831b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-856f469597-bn26z_calico-apiserver(e424bb58-dcec-4399-856b-a30a0d9831b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-856f469597-bn26z" podUID="e424bb58-dcec-4399-856b-a30a0d9831b0" Sep 13 00:12:23.180127 kubelet[3190]: E0913 00:12:23.153475 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.180297 kubelet[3190]: E0913 00:12:23.153515 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-96m5z" Sep 13 00:12:23.180297 kubelet[3190]: E0913 00:12:23.153534 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-96m5z" Sep 13 00:12:23.180297 kubelet[3190]: E0913 00:12:23.153706 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-96m5z_kube-system(a3bb9403-ed1c-46f5-b8ac-f060a9d357fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-96m5z_kube-system(a3bb9403-ed1c-46f5-b8ac-f060a9d357fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-96m5z" podUID="a3bb9403-ed1c-46f5-b8ac-f060a9d357fb" Sep 13 00:12:23.180467 kubelet[3190]: E0913 00:12:23.153822 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.180467 kubelet[3190]: E0913 00:12:23.153843 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:23.180467 kubelet[3190]: E0913 00:12:23.153856 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-l7k22" Sep 13 00:12:23.180555 kubelet[3190]: E0913 00:12:23.153883 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-l7k22_calico-system(cc1327ab-9550-47ae-bd83-f41e2cea6ca2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-l7k22_calico-system(cc1327ab-9550-47ae-bd83-f41e2cea6ca2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-l7k22" podUID="cc1327ab-9550-47ae-bd83-f41e2cea6ca2" Sep 13 00:12:23.180555 kubelet[3190]: E0913 00:12:23.153983 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.180555 kubelet[3190]: E0913 00:12:23.154001 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" Sep 13 00:12:23.180751 kubelet[3190]: E0913 00:12:23.154052 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" Sep 13 00:12:23.180751 kubelet[3190]: E0913 00:12:23.154097 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-856f469597-zpqrf_calico-apiserver(39751ac3-0924-426f-b77b-8d83a90311a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-856f469597-zpqrf_calico-apiserver(39751ac3-0924-426f-b77b-8d83a90311a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" podUID="39751ac3-0924-426f-b77b-8d83a90311a5" Sep 13 00:12:23.180751 kubelet[3190]: I0913 00:12:23.156265 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:23.180751 kubelet[3190]: I0913 00:12:23.158874 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:23.185673 kubelet[3190]: E0913 00:12:23.185629 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.185791 kubelet[3190]: E0913 00:12:23.185684 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pc25k" Sep 13 00:12:23.185791 kubelet[3190]: E0913 00:12:23.185704 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pc25k" Sep 13 00:12:23.185791 kubelet[3190]: E0913 00:12:23.185753 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pc25k_kube-system(c13143f6-e5dc-44d2-a955-cb02eb84e81b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pc25k_kube-system(c13143f6-e5dc-44d2-a955-cb02eb84e81b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pc25k" podUID="c13143f6-e5dc-44d2-a955-cb02eb84e81b" Sep 13 00:12:23.222934 kubelet[3190]: I0913 00:12:23.222513 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:23.223213 containerd[1973]: time="2025-09-13T00:12:23.222562756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:12:23.238458 containerd[1973]: time="2025-09-13T00:12:23.238177660Z" level=info msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\"" Sep 13 00:12:23.244312 containerd[1973]: time="2025-09-13T00:12:23.243654602Z" level=info msg="Ensure that sandbox 7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21 in task-service has been cleanup successfully" Sep 13 00:12:23.260570 containerd[1973]: time="2025-09-13T00:12:23.260519581Z" level=info msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\"" Sep 13 00:12:23.260978 containerd[1973]: time="2025-09-13T00:12:23.260762700Z" level=info msg="Ensure that sandbox 9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b in task-service has been cleanup successfully" Sep 13 00:12:23.283336 containerd[1973]: time="2025-09-13T00:12:23.283229927Z" level=info msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\"" Sep 13 00:12:23.283645 containerd[1973]: time="2025-09-13T00:12:23.283466390Z" level=info msg="Ensure that sandbox cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e in task-service has been cleanup successfully" Sep 13 00:12:23.373014 containerd[1973]: time="2025-09-13T00:12:23.372738693Z" level=error msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" failed" error="failed to destroy network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.373174 kubelet[3190]: E0913 00:12:23.373049 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:23.373486 kubelet[3190]: E0913 00:12:23.373221 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"} Sep 13 00:12:23.373486 kubelet[3190]: E0913 00:12:23.373390 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e424bb58-dcec-4399-856b-a30a0d9831b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 13 00:12:23.374453 kubelet[3190]: E0913 00:12:23.373501 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e424bb58-dcec-4399-856b-a30a0d9831b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-856f469597-bn26z" podUID="e424bb58-dcec-4399-856b-a30a0d9831b0" Sep 13 00:12:23.393483 containerd[1973]: time="2025-09-13T00:12:23.392184020Z" level=error msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" failed" error="failed to destroy network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.393615 kubelet[3190]: E0913 00:12:23.392419 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:23.393615 kubelet[3190]: E0913 00:12:23.392495 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"} Sep 13 00:12:23.393615 kubelet[3190]: E0913 00:12:23.392539 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:23.393615 kubelet[3190]: E0913 00:12:23.392569 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" podUID="b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd" Sep 13 00:12:23.404455 containerd[1973]: time="2025-09-13T00:12:23.404399276Z" level=error msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" failed" error="failed to destroy network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:23.404874 kubelet[3190]: E0913 00:12:23.404830 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:23.404985 kubelet[3190]: E0913 00:12:23.404884 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"} Sep 13 00:12:23.404985 kubelet[3190]: E0913 00:12:23.404933 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a10460e3-a105-4708-a82a-060d5d67b42e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:23.404985 kubelet[3190]: E0913 00:12:23.404964 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a10460e3-a105-4708-a82a-060d5d67b42e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b4d94f48-g9zgz" podUID="a10460e3-a105-4708-a82a-060d5d67b42e" Sep 13 00:12:23.976466 systemd[1]: Created slice kubepods-besteffort-pod7bbf7a35_8d3f_406b_8fc5_674202715e39.slice - libcontainer container kubepods-besteffort-pod7bbf7a35_8d3f_406b_8fc5_674202715e39.slice. 
Sep 13 00:12:23.980666 containerd[1973]: time="2025-09-13T00:12:23.980104864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6h8z,Uid:7bbf7a35-8d3f-406b-8fc5-674202715e39,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:24.074615 containerd[1973]: time="2025-09-13T00:12:24.074546804Z" level=error msg="Failed to destroy network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.075918 containerd[1973]: time="2025-09-13T00:12:24.075864901Z" level=error msg="encountered an error cleaning up failed sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.076018 containerd[1973]: time="2025-09-13T00:12:24.075943475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6h8z,Uid:7bbf7a35-8d3f-406b-8fc5-674202715e39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.078289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8-shm.mount: Deactivated successfully. 
Sep 13 00:12:24.080016 kubelet[3190]: E0913 00:12:24.078575 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.080016 kubelet[3190]: E0913 00:12:24.078639 3190 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6h8z" Sep 13 00:12:24.080016 kubelet[3190]: E0913 00:12:24.078670 3190 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6h8z" Sep 13 00:12:24.080252 kubelet[3190]: E0913 00:12:24.078731 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6h8z_calico-system(7bbf7a35-8d3f-406b-8fc5-674202715e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6h8z_calico-system(7bbf7a35-8d3f-406b-8fc5-674202715e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39" Sep 13 00:12:24.226248 kubelet[3190]: I0913 00:12:24.226219 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:24.228607 containerd[1973]: time="2025-09-13T00:12:24.227664372Z" level=info msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" Sep 13 00:12:24.228607 containerd[1973]: time="2025-09-13T00:12:24.228508678Z" level=info msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\"" Sep 13 00:12:24.228767 kubelet[3190]: I0913 00:12:24.227916 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:24.228835 containerd[1973]: time="2025-09-13T00:12:24.228674019Z" level=info msg="Ensure that sandbox 1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8 in task-service has been cleanup successfully" Sep 13 00:12:24.231101 containerd[1973]: time="2025-09-13T00:12:24.228513144Z" level=info msg="Ensure that sandbox 4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05 in task-service has been cleanup successfully" Sep 13 00:12:24.233092 kubelet[3190]: I0913 00:12:24.233063 3190 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:24.235222 containerd[1973]: time="2025-09-13T00:12:24.234846252Z" level=info msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\"" Sep 13 00:12:24.236123 containerd[1973]: time="2025-09-13T00:12:24.236093871Z" level=info msg="Ensure that sandbox 853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c in task-service has been cleanup successfully" Sep 13 00:12:24.239197 kubelet[3190]: I0913 00:12:24.239120 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:24.243774 containerd[1973]: time="2025-09-13T00:12:24.243725613Z" level=info msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\"" Sep 13 00:12:24.244016 containerd[1973]: time="2025-09-13T00:12:24.243923273Z" level=info msg="Ensure that sandbox 2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e in task-service has been cleanup successfully" Sep 13 00:12:24.246494 kubelet[3190]: I0913 00:12:24.246207 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:24.248714 containerd[1973]: time="2025-09-13T00:12:24.248591618Z" level=info msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" Sep 13 00:12:24.249014 containerd[1973]: time="2025-09-13T00:12:24.248833466Z" level=info msg="Ensure that sandbox f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e in task-service has been cleanup successfully" Sep 13 00:12:24.355756 containerd[1973]: time="2025-09-13T00:12:24.355619242Z" level=error msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" failed" error="failed to destroy network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.357756 kubelet[3190]: E0913 00:12:24.356311 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:24.357756 kubelet[3190]: E0913 00:12:24.357312 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"} Sep 13 00:12:24.357756 kubelet[3190]: E0913 00:12:24.357388 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:24.357756 kubelet[3190]: E0913 00:12:24.357449 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-96m5z" podUID="a3bb9403-ed1c-46f5-b8ac-f060a9d357fb" Sep 13 00:12:24.364444 containerd[1973]: time="2025-09-13T00:12:24.363962619Z" level=error msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" failed" error="failed to destroy network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.364593 kubelet[3190]: E0913 00:12:24.364220 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:24.364593 kubelet[3190]: E0913 00:12:24.364274 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"} Sep 13 00:12:24.364593 kubelet[3190]: E0913 00:12:24.364313 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c13143f6-e5dc-44d2-a955-cb02eb84e81b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:24.364593 kubelet[3190]: E0913 00:12:24.364342 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c13143f6-e5dc-44d2-a955-cb02eb84e81b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pc25k" podUID="c13143f6-e5dc-44d2-a955-cb02eb84e81b" Sep 13 00:12:24.367471 containerd[1973]: time="2025-09-13T00:12:24.366626991Z" level=error msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" failed" error="failed to destroy network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.367893 kubelet[3190]: E0913 00:12:24.367849 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:24.368019 kubelet[3190]: E0913 00:12:24.367913 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"} Sep 13 00:12:24.368019 kubelet[3190]: E0913 00:12:24.367956 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bbf7a35-8d3f-406b-8fc5-674202715e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:24.368019 kubelet[3190]: E0913 00:12:24.367988 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bbf7a35-8d3f-406b-8fc5-674202715e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6h8z" podUID="7bbf7a35-8d3f-406b-8fc5-674202715e39" Sep 13 00:12:24.372830 containerd[1973]: time="2025-09-13T00:12:24.372681711Z" level=error msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" failed" error="failed to destroy network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.373155 containerd[1973]: time="2025-09-13T00:12:24.373071853Z" level=error msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" failed" error="failed to destroy network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:12:24.373417 kubelet[3190]: E0913 00:12:24.373376 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" 
Sep 13 00:12:24.373552 kubelet[3190]: E0913 00:12:24.373462 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e"} Sep 13 00:12:24.373601 kubelet[3190]: E0913 00:12:24.373546 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:24.373714 kubelet[3190]: E0913 00:12:24.373620 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc1327ab-9550-47ae-bd83-f41e2cea6ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-l7k22" podUID="cc1327ab-9550-47ae-bd83-f41e2cea6ca2" Sep 13 00:12:24.373714 kubelet[3190]: E0913 00:12:24.373389 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:24.373714 kubelet[3190]: E0913 00:12:24.373695 3190 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05"} Sep 13 00:12:24.373886 kubelet[3190]: E0913 00:12:24.373727 3190 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39751ac3-0924-426f-b77b-8d83a90311a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:12:24.373886 kubelet[3190]: E0913 00:12:24.373783 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39751ac3-0924-426f-b77b-8d83a90311a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" podUID="39751ac3-0924-426f-b77b-8d83a90311a5" Sep 13 00:12:30.821490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356784270.mount: Deactivated successfully. 
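Every KillPodSandbox failure above has the same root cause: on delete, the Calico CNI plugin stats /var/lib/calico/nodename, which only exists once the calico/node container has started and written the node's name into the shared /var/lib/calico host mount. Until the image pull below completes and calico-node comes up, each retried sandbox teardown fails with this same error. A minimal Go sketch of that gating check (illustrative only, not Calico's actual code):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "strings"
    )

    // nodenameFile is the path named in the stat errors above.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename sketches the precondition behind the failures: teardown
    // can only resolve the node's identity after calico/node has written
    // this file into the host-mounted /var/lib/calico/ directory.
    func readNodename() (string, error) {
        b, err := os.ReadFile(nodenameFile)
        if errors.Is(err, fs.ErrNotExist) {
            return "", fmt.Errorf("calico/node not running or /var/lib/calico/ not mounted: %w", err)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        if name, err := readNodename(); err != nil {
            fmt.Println("CNI delete would fail:", err)
        } else {
            fmt.Println("node:", name)
        }
    }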
Sep 13 00:12:30.924481 containerd[1973]: time="2025-09-13T00:12:30.921593451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:12:30.925420 containerd[1973]: time="2025-09-13T00:12:30.917688066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:30.977093 containerd[1973]: time="2025-09-13T00:12:30.977039268Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:30.979591 containerd[1973]: time="2025-09-13T00:12:30.979516701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:30.980682 containerd[1973]: time="2025-09-13T00:12:30.980252500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.757070858s" Sep 13 00:12:30.980682 containerd[1973]: time="2025-09-13T00:12:30.980371461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:12:31.022759 containerd[1973]: time="2025-09-13T00:12:31.022714359Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:12:31.138417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785014288.mount: Deactivated successfully. Sep 13 00:12:31.163142 containerd[1973]: time="2025-09-13T00:12:31.162970177Z" level=info msg="CreateContainer within sandbox \"b73eb127fb51fa3561fa3484ca3e5ed37875ee6f170ebd94b6e69893b7aecfbe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fbc2b5bd41f98d8c93eb37822f9985d632fce2eb9531bacfa9bbe277767454c3\"" Sep 13 00:12:31.174390 containerd[1973]: time="2025-09-13T00:12:31.173711921Z" level=info msg="StartContainer for \"fbc2b5bd41f98d8c93eb37822f9985d632fce2eb9531bacfa9bbe277767454c3\"" Sep 13 00:12:31.311613 systemd[1]: Started cri-containerd-fbc2b5bd41f98d8c93eb37822f9985d632fce2eb9531bacfa9bbe277767454c3.scope - libcontainer container fbc2b5bd41f98d8c93eb37822f9985d632fce2eb9531bacfa9bbe277767454c3. Sep 13 00:12:31.376278 containerd[1973]: time="2025-09-13T00:12:31.375977899Z" level=info msg="StartContainer for \"fbc2b5bd41f98d8c93eb37822f9985d632fce2eb9531bacfa9bbe277767454c3\" returns successfully" Sep 13 00:12:31.890610 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:12:31.927200 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
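The "Pulled image ... in 7.757070858s" entry above reports a standard Go duration string alongside the image size, so the effective pull throughput can be recovered directly from the two printed values. A small sketch of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both constants are copied from the PullImage log entry above.
        const sizeBytes = 157078201
        d, err := time.ParseDuration("7.757070858s")
        if err != nil {
            panic(err)
        }
        mibps := float64(sizeBytes) / d.Seconds() / (1 << 20)
        fmt.Printf("%d bytes in %s ~= %.1f MiB/s\n", sizeBytes, d, mibps)
    }

That works out to roughly 19.3 MiB/s for this pull.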
Sep 13 00:12:34.137273 kubelet[3190]: I0913 00:12:34.101940 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gfd9d" podStartSLOduration=3.732822443 podStartE2EDuration="22.075592109s" podCreationTimestamp="2025-09-13 00:12:12 +0000 UTC" firstStartedPulling="2025-09-13 00:12:12.644378914 +0000 UTC m=+20.885111445" lastFinishedPulling="2025-09-13 00:12:30.98714859 +0000 UTC m=+39.227881111" observedRunningTime="2025-09-13 00:12:32.366518051 +0000 UTC m=+40.607250595" watchObservedRunningTime="2025-09-13 00:12:34.075592109 +0000 UTC m=+42.316324650" Sep 13 00:12:34.148272 containerd[1973]: time="2025-09-13T00:12:34.147925012Z" level=info msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\"" Sep 13 00:12:34.187886 kernel: bpftool[4762]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:12:34.607410 (udev-worker)[4810]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:12:34.614374 systemd-networkd[1817]: vxlan.calico: Link UP Sep 13 00:12:34.614387 systemd-networkd[1817]: vxlan.calico: Gained carrier Sep 13 00:12:34.657918 (udev-worker)[4560]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:12:34.661007 (udev-worker)[4824]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:12:34.968995 containerd[1973]: time="2025-09-13T00:12:34.967292362Z" level=info msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" Sep 13 00:12:34.968995 containerd[1973]: time="2025-09-13T00:12:34.967974816Z" level=info msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\"" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.102 [INFO][4864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.102 [INFO][4864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" iface="eth0" netns="/var/run/netns/cni-71c1c80f-45ed-4c8f-896e-b619ae93a494" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.103 [INFO][4864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" iface="eth0" netns="/var/run/netns/cni-71c1c80f-45ed-4c8f-896e-b619ae93a494" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.104 [INFO][4864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" iface="eth0" netns="/var/run/netns/cni-71c1c80f-45ed-4c8f-896e-b619ae93a494" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.104 [INFO][4864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.104 [INFO][4864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.222 [INFO][4888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.225 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.225 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.250 [WARNING][4888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.250 [INFO][4888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.253 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:35.262828 containerd[1973]: 2025-09-13 00:12:35.257 [INFO][4864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:35.270276 systemd[1]: run-netns-cni\x2d71c1c80f\x2d45ed\x2d4c8f\x2d896e\x2db619ae93a494.mount: Deactivated successfully. Sep 13 00:12:35.284042 containerd[1973]: time="2025-09-13T00:12:35.283931169Z" level=info msg="TearDown network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" successfully" Sep 13 00:12:35.284042 containerd[1973]: time="2025-09-13T00:12:35.283993631Z" level=info msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" returns successfully" Sep 13 00:12:35.291855 containerd[1973]: time="2025-09-13T00:12:35.291635988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-bn26z,Uid:e424bb58-dcec-4399-856b-a30a0d9831b0,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.310 [INFO][4752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.312 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" iface="eth0" netns="/var/run/netns/cni-d4c5d0c6-cc72-767f-bb6a-b9d001b5bc8a" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.315 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" iface="eth0" netns="/var/run/netns/cni-d4c5d0c6-cc72-767f-bb6a-b9d001b5bc8a" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.316 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" iface="eth0" netns="/var/run/netns/cni-d4c5d0c6-cc72-767f-bb6a-b9d001b5bc8a" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.317 [INFO][4752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:34.317 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.223 [INFO][4767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.225 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.253 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.269 [WARNING][4767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.271 [INFO][4767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0" Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.277 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:35.293342 containerd[1973]: 2025-09-13 00:12:35.286 [INFO][4752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Sep 13 00:12:35.295402 containerd[1973]: time="2025-09-13T00:12:35.294558463Z" level=info msg="TearDown network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" successfully" Sep 13 00:12:35.295402 containerd[1973]: time="2025-09-13T00:12:35.294587500Z" level=info msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" returns successfully" Sep 13 00:12:35.302854 systemd[1]: run-netns-cni\x2dd4c5d0c6\x2dcc72\x2d767f\x2dbb6a\x2db9d001b5bc8a.mount: Deactivated successfully. 
Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.167 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.167 [INFO][4865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" iface="eth0" netns="/var/run/netns/cni-46c20dee-f44b-f071-c9bd-dd2f1d53a672" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.169 [INFO][4865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" iface="eth0" netns="/var/run/netns/cni-46c20dee-f44b-f071-c9bd-dd2f1d53a672" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.170 [INFO][4865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" iface="eth0" netns="/var/run/netns/cni-46c20dee-f44b-f071-c9bd-dd2f1d53a672" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.170 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.170 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.232 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.233 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.277 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.294 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.294 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.301 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:35.310532 containerd[1973]: 2025-09-13 00:12:35.306 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:35.314265 containerd[1973]: time="2025-09-13T00:12:35.310533968Z" level=info msg="TearDown network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" successfully" Sep 13 00:12:35.314265 containerd[1973]: time="2025-09-13T00:12:35.310568140Z" level=info msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" returns successfully" Sep 13 00:12:35.314265 containerd[1973]: time="2025-09-13T00:12:35.311553757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-zpqrf,Uid:39751ac3-0924-426f-b77b-8d83a90311a5,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:12:35.313120 systemd[1]: run-netns-cni\x2d46c20dee\x2df44b\x2df071\x2dc9bd\x2ddd2f1d53a672.mount: Deactivated successfully. Sep 13 00:12:35.525995 kubelet[3190]: I0913 00:12:35.525874 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-backend-key-pair\") pod \"a10460e3-a105-4708-a82a-060d5d67b42e\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " Sep 13 00:12:35.525995 kubelet[3190]: I0913 00:12:35.525933 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmrf7\" (UniqueName: \"kubernetes.io/projected/a10460e3-a105-4708-a82a-060d5d67b42e-kube-api-access-dmrf7\") pod \"a10460e3-a105-4708-a82a-060d5d67b42e\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " Sep 13 00:12:35.525995 kubelet[3190]: I0913 00:12:35.525989 3190 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-ca-bundle\") pod \"a10460e3-a105-4708-a82a-060d5d67b42e\" (UID: \"a10460e3-a105-4708-a82a-060d5d67b42e\") " Sep 13 00:12:35.588375 kubelet[3190]: I0913 00:12:35.562789 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a10460e3-a105-4708-a82a-060d5d67b42e" (UID: "a10460e3-a105-4708-a82a-060d5d67b42e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:12:35.626636 kubelet[3190]: I0913 00:12:35.626594 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-ca-bundle\") on node \"ip-172-31-31-45\" DevicePath \"\"" Sep 13 00:12:35.629900 systemd[1]: Started sshd@7-172.31.31.45:22-139.178.89.65:38120.service - OpenSSH per-connection server daemon (139.178.89.65:38120). Sep 13 00:12:35.643073 kubelet[3190]: I0913 00:12:35.643035 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10460e3-a105-4708-a82a-060d5d67b42e-kube-api-access-dmrf7" (OuterVolumeSpecName: "kube-api-access-dmrf7") pod "a10460e3-a105-4708-a82a-060d5d67b42e" (UID: "a10460e3-a105-4708-a82a-060d5d67b42e"). InnerVolumeSpecName "kube-api-access-dmrf7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:12:35.658017 kubelet[3190]: I0913 00:12:35.643223 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a10460e3-a105-4708-a82a-060d5d67b42e" (UID: "a10460e3-a105-4708-a82a-060d5d67b42e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:12:35.645816 systemd[1]: var-lib-kubelet-pods-a10460e3\x2da105\x2d4708\x2da82a\x2d060d5d67b42e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmrf7.mount: Deactivated successfully. Sep 13 00:12:35.652745 systemd[1]: var-lib-kubelet-pods-a10460e3\x2da105\x2d4708\x2da82a\x2d060d5d67b42e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:12:35.793969 kubelet[3190]: I0913 00:12:35.793807 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a10460e3-a105-4708-a82a-060d5d67b42e-whisker-backend-key-pair\") on node \"ip-172-31-31-45\" DevicePath \"\"" Sep 13 00:12:35.793969 kubelet[3190]: I0913 00:12:35.793842 3190 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmrf7\" (UniqueName: \"kubernetes.io/projected/a10460e3-a105-4708-a82a-060d5d67b42e-kube-api-access-dmrf7\") on node \"ip-172-31-31-45\" DevicePath \"\"" Sep 13 00:12:35.798780 systemd[1]: Removed slice kubepods-besteffort-poda10460e3_a105_4708_a82a_060d5d67b42e.slice - libcontainer container kubepods-besteffort-poda10460e3_a105_4708_a82a_060d5d67b42e.slice. Sep 13 00:12:35.808723 systemd-networkd[1817]: vxlan.calico: Gained IPv6LL Sep 13 00:12:35.971707 containerd[1973]: time="2025-09-13T00:12:35.971663575Z" level=info msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\"" Sep 13 00:12:35.980461 kubelet[3190]: I0913 00:12:35.980380 3190 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10460e3-a105-4708-a82a-060d5d67b42e" path="/var/lib/kubelet/pods/a10460e3-a105-4708-a82a-060d5d67b42e/volumes" Sep 13 00:12:36.035384 systemd[1]: Created slice kubepods-besteffort-poda864767b_7de4_4ec1_8721_7259f1395851.slice - libcontainer container kubepods-besteffort-poda864767b_7de4_4ec1_8721_7259f1395851.slice. 
Sep 13 00:12:36.089610 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 38120 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:12:36.096992 kubelet[3190]: I0913 00:12:36.096079 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a864767b-7de4-4ec1-8721-7259f1395851-whisker-backend-key-pair\") pod \"whisker-7c94df5fc9-qtgd9\" (UID: \"a864767b-7de4-4ec1-8721-7259f1395851\") " pod="calico-system/whisker-7c94df5fc9-qtgd9" Sep 13 00:12:36.096725 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:36.103387 kubelet[3190]: I0913 00:12:36.103014 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxmxk\" (UniqueName: \"kubernetes.io/projected/a864767b-7de4-4ec1-8721-7259f1395851-kube-api-access-sxmxk\") pod \"whisker-7c94df5fc9-qtgd9\" (UID: \"a864767b-7de4-4ec1-8721-7259f1395851\") " pod="calico-system/whisker-7c94df5fc9-qtgd9" Sep 13 00:12:36.106258 kubelet[3190]: I0913 00:12:36.103366 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a864767b-7de4-4ec1-8721-7259f1395851-whisker-ca-bundle\") pod \"whisker-7c94df5fc9-qtgd9\" (UID: \"a864767b-7de4-4ec1-8721-7259f1395851\") " pod="calico-system/whisker-7c94df5fc9-qtgd9" Sep 13 00:12:36.118038 systemd-logind[1958]: New session 8 of user core. Sep 13 00:12:36.123717 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:12:36.193245 systemd-networkd[1817]: caliee1db7c9ec3: Link UP Sep 13 00:12:36.196349 systemd-networkd[1817]: caliee1db7c9ec3: Gained carrier Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:35.924 [INFO][4921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0 calico-apiserver-856f469597- calico-apiserver e424bb58-dcec-4399-856b-a30a0d9831b0 930 0 2025-09-13 00:12:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:856f469597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-45 calico-apiserver-856f469597-bn26z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee1db7c9ec3 [] [] }} ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:35.924 [INFO][4921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.085 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" HandleID="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" 
Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.086 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" HandleID="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005fd180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-45", "pod":"calico-apiserver-856f469597-bn26z", "timestamp":"2025-09-13 00:12:36.085476146 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.086 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.086 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.086 [INFO][4939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.122 [INFO][4939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.144 [INFO][4939] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.150 [INFO][4939] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.153 [INFO][4939] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.156 [INFO][4939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.156 [INFO][4939] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.160 [INFO][4939] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081 Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.168 [INFO][4939] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4939] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.1/26] block=192.168.1.0/26 handle="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" host="ip-172-31-31-45" Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.1/26] handle="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" host="ip-172-31-31-45" 
Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:36.274176 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.1/26] IPv6=[] ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" HandleID="k8s-pod-network.bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.184 [INFO][4921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"e424bb58-dcec-4399-856b-a30a0d9831b0", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-apiserver-856f469597-bn26z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1db7c9ec3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.185 [INFO][4921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.1/32] ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.185 [INFO][4921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee1db7c9ec3 ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.197 [INFO][4921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 
13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.198 [INFO][4921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"e424bb58-dcec-4399-856b-a30a0d9831b0", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081", Pod:"calico-apiserver-856f469597-bn26z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1db7c9ec3", MAC:"d2:97:38:e0:6c:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.280850 containerd[1973]: 2025-09-13 00:12:36.237 [INFO][4921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-bn26z" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:36.357839 containerd[1973]: time="2025-09-13T00:12:36.357582234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c94df5fc9-qtgd9,Uid:a864767b-7de4-4ec1-8721-7259f1395851,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:36.373950 systemd-networkd[1817]: cali2764d338f41: Link UP Sep 13 00:12:36.388824 systemd-networkd[1817]: cali2764d338f41: Gained carrier Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:35.933 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0 calico-apiserver-856f469597- calico-apiserver 39751ac3-0924-426f-b77b-8d83a90311a5 932 0 2025-09-13 00:12:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:856f469597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-45 calico-apiserver-856f469597-zpqrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2764d338f41 [] [] }} 
ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:35.933 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.110 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" HandleID="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.111 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" HandleID="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035a9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-45", "pod":"calico-apiserver-856f469597-zpqrf", "timestamp":"2025-09-13 00:12:36.098247661 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.113 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.176 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.228 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.276 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.301 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.306 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.312 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.312 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.317 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820 Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.325 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.341 [INFO][4944] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.2/26] block=192.168.1.0/26 handle="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.341 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.2/26] handle="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" host="ip-172-31-31-45" Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.341 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
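The host-side interface names chosen in these traces (caliee1db7c9ec3 above, cali2764d338f41 for this endpoint) are exactly 15 characters, the kernel's interface-name limit, which fits Calico's usual scheme of a "cali" prefix plus the first 11 hex characters of a hash of the workload endpoint identity. Treat the exact hash input below as an assumption; only the prefix-plus-11-hex shape is evident from the log:

    package main

    import (
        "crypto/sha1"
        "fmt"
    )

    // vethName sketches the presumed naming rule: "cali" + 11 hex chars of
    // a SHA-1 digest, giving a deterministic 15-char interface name.
    func vethName(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return fmt.Sprintf("cali%x", sum)[:15]
    }

    func main() {
        // Hypothetical input; the real plugin derives it from the
        // workload endpoint's namespace and pod name.
        fmt.Println(vethName("calico-apiserver.calico-apiserver-856f469597-zpqrf"))
    }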
Sep 13 00:12:36.416943 containerd[1973]: 2025-09-13 00:12:36.341 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.2/26] IPv6=[] ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" HandleID="k8s-pod-network.880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.357 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"39751ac3-0924-426f-b77b-8d83a90311a5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-apiserver-856f469597-zpqrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2764d338f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.357 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.2/32] ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.359 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2764d338f41 ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.388 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.390 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"39751ac3-0924-426f-b77b-8d83a90311a5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820", Pod:"calico-apiserver-856f469597-zpqrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2764d338f41", MAC:"5a:42:a0:76:25:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.418298 containerd[1973]: 2025-09-13 00:12:36.411 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820" Namespace="calico-apiserver" Pod="calico-apiserver-856f469597-zpqrf" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" iface="eth0" netns="/var/run/netns/cni-91a68704-779a-e748-8709-44be25ff615b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" iface="eth0" netns="/var/run/netns/cni-91a68704-779a-e748-8709-44be25ff615b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" iface="eth0" netns="/var/run/netns/cni-91a68704-779a-e748-8709-44be25ff615b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.164 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.274 [INFO][4970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.277 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.341 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.363 [WARNING][4970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.363 [INFO][4970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.369 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:36.438792 containerd[1973]: 2025-09-13 00:12:36.404 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Sep 13 00:12:36.439400 containerd[1973]: time="2025-09-13T00:12:36.439052517Z" level=info msg="TearDown network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" successfully" Sep 13 00:12:36.439400 containerd[1973]: time="2025-09-13T00:12:36.439086975Z" level=info msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" returns successfully" Sep 13 00:12:36.441290 containerd[1973]: time="2025-09-13T00:12:36.440211696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448c99594-x6ss9,Uid:b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:36.450776 containerd[1973]: time="2025-09-13T00:12:36.449866771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:36.450776 containerd[1973]: time="2025-09-13T00:12:36.449937032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:36.450776 containerd[1973]: time="2025-09-13T00:12:36.449961352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:36.461854 containerd[1973]: time="2025-09-13T00:12:36.461121117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:36.570023 systemd[1]: Started cri-containerd-bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081.scope - libcontainer container bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081. Sep 13 00:12:36.603583 containerd[1973]: time="2025-09-13T00:12:36.591665837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:36.603583 containerd[1973]: time="2025-09-13T00:12:36.591731290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:36.603583 containerd[1973]: time="2025-09-13T00:12:36.591763973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:36.603583 containerd[1973]: time="2025-09-13T00:12:36.591925275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:36.684210 systemd[1]: run-netns-cni\x2d91a68704\x2d779a\x2de748\x2d8709\x2d44be25ff615b.mount: Deactivated successfully. Sep 13 00:12:36.731151 systemd[1]: run-containerd-runc-k8s.io-880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820-runc.tEfSXC.mount: Deactivated successfully. Sep 13 00:12:36.748662 systemd[1]: Started cri-containerd-880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820.scope - libcontainer container 880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820. 
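The `run-netns-cni\x2d91a68704…` and `run-containerd-runc-…` mount units in the records above show systemd's unit-name escaping, where each `-` inside a path component is encoded as `\x2d`. A minimal Go sketch of the reverse mapping, assuming only well-formed two-digit hex escapes as in these lines (`unescapeUnit` is an illustrative helper, not a systemd API):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's \xNN unit-name escaping, e.g.
// "run-netns-cni\x2d91a68704….mount" -> "run-netns-cni-91a68704….mount".
// Assumes well-formed two-digit hex escapes, as in the journal lines above.
func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); {
		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(name[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d91a68704\x2d779a\x2de748\x2d8709\x2d44be25ff615b.mount`))
	// -> run-netns-cni-91a68704-779a-e748-8709-44be25ff615b.mount, the netns path seen above
}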
Sep 13 00:12:36.896730 systemd-networkd[1817]: cali86041604332: Link UP Sep 13 00:12:36.903538 systemd-networkd[1817]: cali86041604332: Gained carrier Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.555 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0 whisker-7c94df5fc9- calico-system a864767b-7de4-4ec1-8721-7259f1395851 976 0 2025-09-13 00:12:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c94df5fc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-45 whisker-7c94df5fc9-qtgd9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali86041604332 [] [] }} ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.555 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.740 [INFO][5076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" HandleID="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Workload="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.740 [INFO][5076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" HandleID="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Workload="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"whisker-7c94df5fc9-qtgd9", "timestamp":"2025-09-13 00:12:36.740223972 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.740 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.740 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.741 [INFO][5076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.763 [INFO][5076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.781 [INFO][5076] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.799 [INFO][5076] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.818 [INFO][5076] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.825 [INFO][5076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.825 [INFO][5076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.830 [INFO][5076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4 Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.845 [INFO][5076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.859 [INFO][5076] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.3/26] block=192.168.1.0/26 handle="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.859 [INFO][5076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.3/26] handle="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" host="ip-172-31-31-45" Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.859 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
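Each assignment above runs against the node's affine block 192.168.1.0/26, a 64-address block from which the workloads on ip-172-31-31-45 receive .2 through .6 over the course of this section. A standard-library sketch that checks the claimed addresses against that block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the log; a /26 spans 64 addresses (.0-.63).
	block := netip.MustParsePrefix("192.168.1.0/26")
	// Addresses claimed on ip-172-31-31-45 in this section of the log.
	for _, s := range []string{"192.168.1.2", "192.168.1.3", "192.168.1.4", "192.168.1.5", "192.168.1.6"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %t\n", ip, block, block.Contains(ip))
	}
}

All five prints report true, consistent with "Affinity is confirmed and block has been loaded" preceding every claim above.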
Sep 13 00:12:36.960856 containerd[1973]: 2025-09-13 00:12:36.859 [INFO][5076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.3/26] IPv6=[] ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" HandleID="k8s-pod-network.caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Workload="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.872 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0", GenerateName:"whisker-7c94df5fc9-", Namespace:"calico-system", SelfLink:"", UID:"a864767b-7de4-4ec1-8721-7259f1395851", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c94df5fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"whisker-7c94df5fc9-qtgd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali86041604332", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.880 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.3/32] ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.883 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86041604332 ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.903 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.908 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" 
WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0", GenerateName:"whisker-7c94df5fc9-", Namespace:"calico-system", SelfLink:"", UID:"a864767b-7de4-4ec1-8721-7259f1395851", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c94df5fc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4", Pod:"whisker-7c94df5fc9-qtgd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali86041604332", MAC:"7e:60:ac:64:6c:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:36.967385 containerd[1973]: 2025-09-13 00:12:36.944 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4" Namespace="calico-system" Pod="whisker-7c94df5fc9-qtgd9" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--7c94df5fc9--qtgd9-eth0" Sep 13 00:12:36.986146 containerd[1973]: time="2025-09-13T00:12:36.985239397Z" level=info msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\"" Sep 13 00:12:36.992156 containerd[1973]: time="2025-09-13T00:12:36.991383395Z" level=info msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\"" Sep 13 00:12:36.996206 containerd[1973]: time="2025-09-13T00:12:36.995767544Z" level=info msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" Sep 13 00:12:37.076068 systemd-networkd[1817]: cali7da32fcfe75: Link UP Sep 13 00:12:37.076343 systemd-networkd[1817]: cali7da32fcfe75: Gained carrier Sep 13 00:12:37.103129 containerd[1973]: time="2025-09-13T00:12:37.103037036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-zpqrf,Uid:39751ac3-0924-426f-b77b-8d83a90311a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820\"" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.749 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0 calico-kube-controllers-5448c99594- calico-system b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd 979 0 2025-09-13 00:12:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5448c99594 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-45 calico-kube-controllers-5448c99594-x6ss9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7da32fcfe75 [] [] }} ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.751 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.838 [INFO][5101] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" HandleID="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.839 [INFO][5101] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" HandleID="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003760d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"calico-kube-controllers-5448c99594-x6ss9", "timestamp":"2025-09-13 00:12:36.838661554 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.839 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.860 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.860 [INFO][5101] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.880 [INFO][5101] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.909 [INFO][5101] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.955 [INFO][5101] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.968 [INFO][5101] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.992 [INFO][5101] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:36.994 [INFO][5101] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.008 [INFO][5101] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7 Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.035 [INFO][5101] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.065 [INFO][5101] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.4/26] block=192.168.1.0/26 handle="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.065 [INFO][5101] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.4/26] handle="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" host="ip-172-31-31-45" Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.065 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
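The repeated "About to acquire / Acquired / Released host-wide IPAM lock" triplets ([5076], [5101], and later [5308], [5339]) show concurrent CNI ADDs being serialized per host, so two pods racing through the plugin cannot claim the same address from the block. A sketch of that pattern with a plain mutex; this illustrates the behavior the log describes, not Calico's actual implementation:

package main

import (
	"fmt"
	"sync"
)

// ipamBlock mimics the per-host serialization the log reports around
// each assignment; illustration only.
type ipamBlock struct {
	mu   sync.Mutex
	next int // last host offset handed out within 192.168.1.0/26
}

func (b *ipamBlock) assign(pod string) int {
	fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", pod)
	b.mu.Lock()
	fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", pod)
	b.next++
	n := b.next
	b.mu.Unlock()
	fmt.Printf("[%s] Released host-wide IPAM lock.\n", pod)
	return n
}

func main() {
	b := &ipamBlock{next: 2} // .2 was claimed earlier in this section; .3 is next free
	var wg sync.WaitGroup
	for _, pod := range []string{"whisker", "calico-kube-controllers", "coredns", "goldmane"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Printf("[%s] claimed 192.168.1.%d/26\n", p, b.assign(p))
		}(pod)
	}
	wg.Wait()
}

Whatever order the goroutines win the lock, every claim is unique, which is the property the host-wide lock buys in the log above.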
Sep 13 00:12:37.189459 containerd[1973]: 2025-09-13 00:12:37.065 [INFO][5101] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.4/26] IPv6=[] ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" HandleID="k8s-pod-network.18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 00:12:37.071 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0", GenerateName:"calico-kube-controllers-5448c99594-", Namespace:"calico-system", SelfLink:"", UID:"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448c99594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"calico-kube-controllers-5448c99594-x6ss9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da32fcfe75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 00:12:37.071 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.4/32] ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 00:12:37.071 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7da32fcfe75 ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 00:12:37.075 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 
00:12:37.075 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0", GenerateName:"calico-kube-controllers-5448c99594-", Namespace:"calico-system", SelfLink:"", UID:"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448c99594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7", Pod:"calico-kube-controllers-5448c99594-x6ss9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da32fcfe75", MAC:"26:8d:12:ec:15:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:37.191305 containerd[1973]: 2025-09-13 00:12:37.112 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7" Namespace="calico-system" Pod="calico-kube-controllers-5448c99594-x6ss9" WorkloadEndpoint="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0" Sep 13 00:12:37.200955 containerd[1973]: time="2025-09-13T00:12:37.190273994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:37.200955 containerd[1973]: time="2025-09-13T00:12:37.194172023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:37.200955 containerd[1973]: time="2025-09-13T00:12:37.194190430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:37.200955 containerd[1973]: time="2025-09-13T00:12:37.194318133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:37.226005 containerd[1973]: time="2025-09-13T00:12:37.221824393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-856f469597-bn26z,Uid:e424bb58-dcec-4399-856b-a30a0d9831b0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081\"" Sep 13 00:12:37.290871 containerd[1973]: time="2025-09-13T00:12:37.290600774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:12:37.332328 systemd[1]: Started cri-containerd-caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4.scope - libcontainer container caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4. Sep 13 00:12:37.455661 containerd[1973]: time="2025-09-13T00:12:37.455127340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:37.462360 containerd[1973]: time="2025-09-13T00:12:37.460682009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:37.464653 containerd[1973]: time="2025-09-13T00:12:37.463231686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:37.469672 sshd[4912]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:37.471990 containerd[1973]: time="2025-09-13T00:12:37.467136278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:37.475826 systemd-logind[1958]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:12:37.476891 systemd[1]: sshd@7-172.31.31.45:22-139.178.89.65:38120.service: Deactivated successfully. Sep 13 00:12:37.480095 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:12:37.482915 systemd-logind[1958]: Removed session 8. Sep 13 00:12:37.536656 systemd-networkd[1817]: caliee1db7c9ec3: Gained IPv6LL Sep 13 00:12:37.551850 systemd[1]: Started cri-containerd-18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7.scope - libcontainer container 18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7. Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.442 [INFO][5155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.448 [INFO][5155] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" iface="eth0" netns="/var/run/netns/cni-27a73350-5e8b-c035-ec0b-0bee8f3e8d91" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.449 [INFO][5155] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" iface="eth0" netns="/var/run/netns/cni-27a73350-5e8b-c035-ec0b-0bee8f3e8d91" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.449 [INFO][5155] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" iface="eth0" netns="/var/run/netns/cni-27a73350-5e8b-c035-ec0b-0bee8f3e8d91" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.449 [INFO][5155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.449 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.616 [INFO][5242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.618 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.618 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.633 [WARNING][5242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.634 [INFO][5242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.638 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:37.660724 containerd[1973]: 2025-09-13 00:12:37.651 [INFO][5155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Sep 13 00:12:37.661911 containerd[1973]: time="2025-09-13T00:12:37.661562083Z" level=info msg="TearDown network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" successfully" Sep 13 00:12:37.664527 containerd[1973]: time="2025-09-13T00:12:37.661594419Z" level=info msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" returns successfully" Sep 13 00:12:37.670027 containerd[1973]: time="2025-09-13T00:12:37.669875767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pc25k,Uid:c13143f6-e5dc-44d2-a955-cb02eb84e81b,Namespace:kube-system,Attempt:1,}" Sep 13 00:12:37.670193 systemd[1]: run-netns-cni\x2d27a73350\x2d5e8b\x2dc035\x2dec0b\x2d0bee8f3e8d91.mount: Deactivated successfully. 
Sep 13 00:12:37.691835 containerd[1973]: time="2025-09-13T00:12:37.691615285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c94df5fc9-qtgd9,Uid:a864767b-7de4-4ec1-8721-7259f1395851,Namespace:calico-system,Attempt:0,} returns sandbox id \"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4\"" Sep 13 00:12:37.700922 containerd[1973]: time="2025-09-13T00:12:37.700863784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448c99594-x6ss9,Uid:b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd,Namespace:calico-system,Attempt:1,} returns sandbox id \"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7\"" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.515 [INFO][5176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.520 [INFO][5176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" iface="eth0" netns="/var/run/netns/cni-88d68ec2-072d-d98e-7c4d-50f4b64df0f9" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.528 [INFO][5176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" iface="eth0" netns="/var/run/netns/cni-88d68ec2-072d-d98e-7c4d-50f4b64df0f9" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.533 [INFO][5176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" iface="eth0" netns="/var/run/netns/cni-88d68ec2-072d-d98e-7c4d-50f4b64df0f9" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.533 [INFO][5176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.535 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.722 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.724 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.724 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.741 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.741 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.745 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.749 [INFO][5176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.577 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.577 [INFO][5180] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" iface="eth0" netns="/var/run/netns/cni-2fc979a3-a710-012f-5b3b-0dc37188c11f" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.577 [INFO][5180] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" iface="eth0" netns="/var/run/netns/cni-2fc979a3-a710-012f-5b3b-0dc37188c11f" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.578 [INFO][5180] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" iface="eth0" netns="/var/run/netns/cni-2fc979a3-a710-012f-5b3b-0dc37188c11f" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.578 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.578 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.735 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.736 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.745 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.757 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.757 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.760 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:37.784591 containerd[1973]: 2025-09-13 00:12:37.763 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Sep 13 00:12:37.790082 containerd[1973]: time="2025-09-13T00:12:37.787551176Z" level=info msg="TearDown network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" successfully" Sep 13 00:12:37.790082 containerd[1973]: time="2025-09-13T00:12:37.787597000Z" level=info msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" returns successfully" Sep 13 00:12:37.790741 systemd[1]: run-netns-cni\x2d2fc979a3\x2da710\x2d012f\x2d5b3b\x2d0dc37188c11f.mount: Deactivated successfully. Sep 13 00:12:37.791072 systemd[1]: run-netns-cni\x2d88d68ec2\x2d072d\x2dd98e\x2d7c4d\x2d50f4b64df0f9.mount: Deactivated successfully. Sep 13 00:12:37.794133 containerd[1973]: time="2025-09-13T00:12:37.794094640Z" level=info msg="TearDown network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" successfully" Sep 13 00:12:37.794224 containerd[1973]: time="2025-09-13T00:12:37.794134696Z" level=info msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" returns successfully" Sep 13 00:12:37.795450 containerd[1973]: time="2025-09-13T00:12:37.795179940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6h8z,Uid:7bbf7a35-8d3f-406b-8fc5-674202715e39,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:37.796743 containerd[1973]: time="2025-09-13T00:12:37.796698577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-l7k22,Uid:cc1327ab-9550-47ae-bd83-f41e2cea6ca2,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:37.966700 containerd[1973]: time="2025-09-13T00:12:37.966658194Z" level=info msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\"" Sep 13 00:12:37.998746 systemd-networkd[1817]: cali64f0032fe16: Link UP Sep 13 00:12:38.000228 systemd-networkd[1817]: cali64f0032fe16: Gained carrier Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.806 [INFO][5294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0 coredns-674b8bbfcf- kube-system c13143f6-e5dc-44d2-a955-cb02eb84e81b 1006 0 2025-09-13 00:11:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-45 coredns-674b8bbfcf-pc25k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64f0032fe16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 
} {metrics TCP 9153 0 }] [] }} ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.806 [INFO][5294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.878 [INFO][5308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" HandleID="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.878 [INFO][5308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" HandleID="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-45", "pod":"coredns-674b8bbfcf-pc25k", "timestamp":"2025-09-13 00:12:37.878637763 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.879 [INFO][5308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.879 [INFO][5308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.879 [INFO][5308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.906 [INFO][5308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.917 [INFO][5308] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.925 [INFO][5308] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.929 [INFO][5308] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.934 [INFO][5308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.934 [INFO][5308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.936 [INFO][5308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946 Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.952 [INFO][5308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.968 [INFO][5308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.5/26] block=192.168.1.0/26 handle="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.969 [INFO][5308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.5/26] handle="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" host="ip-172-31-31-45" Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.969 [INFO][5308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
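The per-pod results are easiest to read out of the "Calico CNI IPAM assigned addresses" records (ipam_plugin.go 283). A small sketch, assuming journal lines shaped like the ones in this section, that extracts the (Workload, IPv4) pairs:

package main

import (
	"fmt"
	"regexp"
)

// assigned matches the ipam_plugin.go 283 records above, capturing the
// claimed IPv4 prefix and the workload endpoint name.
var assigned = regexp.MustCompile(`IPAM assigned addresses IPv4=\[([0-9./]+)\].*Workload="([^"]+)"`)

func main() {
	// Sample record copied from the coredns assignment above.
	line := `2025-09-13 00:12:37.969 [INFO][5308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.5/26] IPv6=[] ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" HandleID="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"`
	if m := assigned.FindStringSubmatch(line); m != nil {
		fmt.Printf("%s -> %s\n", m[2], m[1])
	}
}

Run over this section, the same pattern yields .2 for calico-apiserver-856f469597-zpqrf, .3 for whisker, .4 for calico-kube-controllers, .5 for coredns, and .6 for goldmane.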
Sep 13 00:12:38.058823 containerd[1973]: 2025-09-13 00:12:37.969 [INFO][5308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.5/26] IPv6=[] ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" HandleID="k8s-pod-network.0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:37.982 [INFO][5294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c13143f6-e5dc-44d2-a955-cb02eb84e81b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"coredns-674b8bbfcf-pc25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f0032fe16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:37.982 [INFO][5294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.5/32] ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:37.982 [INFO][5294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64f0032fe16 ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:38.002 [INFO][5294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" 
WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:38.003 [INFO][5294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c13143f6-e5dc-44d2-a955-cb02eb84e81b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946", Pod:"coredns-674b8bbfcf-pc25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f0032fe16", MAC:"f2:15:31:dd:a1:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.059740 containerd[1973]: 2025-09-13 00:12:38.051 [INFO][5294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946" Namespace="kube-system" Pod="coredns-674b8bbfcf-pc25k" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0" Sep 13 00:12:38.104140 systemd-networkd[1817]: cali0437f99b393: Link UP Sep 13 00:12:38.112746 systemd-networkd[1817]: cali0437f99b393: Gained carrier Sep 13 00:12:38.153842 containerd[1973]: time="2025-09-13T00:12:38.152773636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:38.153842 containerd[1973]: time="2025-09-13T00:12:38.152871734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:38.153842 containerd[1973]: time="2025-09-13T00:12:38.152895470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.153842 containerd[1973]: time="2025-09-13T00:12:38.153017210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.896 [INFO][5318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0 goldmane-54d579b49d- calico-system cc1327ab-9550-47ae-bd83-f41e2cea6ca2 1007 0 2025-09-13 00:12:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-45 goldmane-54d579b49d-l7k22 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0437f99b393 [] [] }} ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.897 [INFO][5318] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.951 [INFO][5339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" HandleID="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.952 [INFO][5339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" HandleID="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"goldmane-54d579b49d-l7k22", "timestamp":"2025-09-13 00:12:37.951244942 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.952 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.969 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:37.970 [INFO][5339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.011 [INFO][5339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.034 [INFO][5339] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.045 [INFO][5339] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.052 [INFO][5339] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.059 [INFO][5339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.059 [INFO][5339] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.063 [INFO][5339] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0 Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.077 [INFO][5339] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.093 [INFO][5339] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.6/26] block=192.168.1.0/26 handle="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.093 [INFO][5339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.6/26] handle="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" host="ip-172-31-31-45" Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.093 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:38.162091 containerd[1973]: 2025-09-13 00:12:38.094 [INFO][5339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.6/26] IPv6=[] ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" HandleID="k8s-pod-network.858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.100 [INFO][5318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc1327ab-9550-47ae-bd83-f41e2cea6ca2", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"goldmane-54d579b49d-l7k22", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0437f99b393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.100 [INFO][5318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.6/32] ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.100 [INFO][5318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0437f99b393 ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.117 [INFO][5318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.122 [INFO][5318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" 
WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc1327ab-9550-47ae-bd83-f41e2cea6ca2", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0", Pod:"goldmane-54d579b49d-l7k22", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0437f99b393", MAC:"9a:ec:9a:a7:e5:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.163310 containerd[1973]: 2025-09-13 00:12:38.150 [INFO][5318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0" Namespace="calico-system" Pod="goldmane-54d579b49d-l7k22" WorkloadEndpoint="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:38.205997 systemd[1]: Started cri-containerd-0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946.scope - libcontainer container 0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946. Sep 13 00:12:38.250182 containerd[1973]: time="2025-09-13T00:12:38.250043035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:38.250182 containerd[1973]: time="2025-09-13T00:12:38.250122928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:38.250182 containerd[1973]: time="2025-09-13T00:12:38.250151862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.253346 containerd[1973]: time="2025-09-13T00:12:38.252865102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.265821 systemd-networkd[1817]: calic02c8bdc50b: Link UP Sep 13 00:12:38.271887 systemd-networkd[1817]: calic02c8bdc50b: Gained carrier Sep 13 00:12:38.307176 systemd-networkd[1817]: cali2764d338f41: Gained IPv6LL Sep 13 00:12:38.307702 systemd-networkd[1817]: cali7da32fcfe75: Gained IPv6LL Sep 13 00:12:38.314648 systemd[1]: Started cri-containerd-858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0.scope - libcontainer container 858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0. 
Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.114 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.115 [INFO][5362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" iface="eth0" netns="/var/run/netns/cni-9a6e2abd-e43d-989c-e5dd-081cb5f26cc5" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.116 [INFO][5362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" iface="eth0" netns="/var/run/netns/cni-9a6e2abd-e43d-989c-e5dd-081cb5f26cc5" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.121 [INFO][5362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" iface="eth0" netns="/var/run/netns/cni-9a6e2abd-e43d-989c-e5dd-081cb5f26cc5" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.121 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.121 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.257 [INFO][5385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.261 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.261 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.293 [WARNING][5385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.293 [INFO][5385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.297 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:38.332300 containerd[1973]: 2025-09-13 00:12:38.319 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Sep 13 00:12:38.333227 containerd[1973]: time="2025-09-13T00:12:38.332718679Z" level=info msg="TearDown network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" successfully" Sep 13 00:12:38.333227 containerd[1973]: time="2025-09-13T00:12:38.332759150Z" level=info msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" returns successfully" Sep 13 00:12:38.334520 containerd[1973]: time="2025-09-13T00:12:38.333823902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96m5z,Uid:a3bb9403-ed1c-46f5-b8ac-f060a9d357fb,Namespace:kube-system,Attempt:1,}" Sep 13 00:12:38.335790 containerd[1973]: time="2025-09-13T00:12:38.335754332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pc25k,Uid:c13143f6-e5dc-44d2-a955-cb02eb84e81b,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946\"" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:37.920 [INFO][5312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0 csi-node-driver- calico-system 7bbf7a35-8d3f-406b-8fc5-674202715e39 1008 0 2025-09-13 00:12:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-45 csi-node-driver-r6h8z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic02c8bdc50b [] [] }} ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:37.920 [INFO][5312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.039 [INFO][5344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" HandleID="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.040 [INFO][5344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" HandleID="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-45", "pod":"csi-node-driver-r6h8z", "timestamp":"2025-09-13 00:12:38.037308286 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.040 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.095 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.096 [INFO][5344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.125 [INFO][5344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.142 [INFO][5344] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.164 [INFO][5344] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.170 [INFO][5344] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.178 [INFO][5344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.178 [INFO][5344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.190 [INFO][5344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.213 [INFO][5344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.243 [INFO][5344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.7/26] block=192.168.1.0/26 handle="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.243 [INFO][5344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.7/26] handle="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" host="ip-172-31-31-45" Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.243 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
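The RunPodSandbox/StopPodSandbox records here are containerd serving CRI requests from the kubelet over its unix socket. Below is a sketch of querying that same CRI endpoint from Go; the socket path and the use of the v1 CRI API are assumptions that depend on the node's containerd configuration.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Assumed socket path; containerd deployments commonly use it.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

        // List sandboxes; ids like 0d1d07e1a080... and 858d18fdea27... in the
        // log would show up here while the pods are running.
        pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Id[:13], p.Metadata.Namespace+"/"+p.Metadata.Name, p.State)
        }
    }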
Sep 13 00:12:38.359696 containerd[1973]: 2025-09-13 00:12:38.243 [INFO][5344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.7/26] IPv6=[] ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" HandleID="k8s-pod-network.6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.252 [INFO][5312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bbf7a35-8d3f-406b-8fc5-674202715e39", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"csi-node-driver-r6h8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic02c8bdc50b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.253 [INFO][5312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.7/32] ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.253 [INFO][5312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic02c8bdc50b ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.277 [INFO][5312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.282 [INFO][5312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" 
Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bbf7a35-8d3f-406b-8fc5-674202715e39", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a", Pod:"csi-node-driver-r6h8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic02c8bdc50b", MAC:"9e:1c:dc:23:3e:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.362255 containerd[1973]: 2025-09-13 00:12:38.323 [INFO][5312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a" Namespace="calico-system" Pod="csi-node-driver-r6h8z" WorkloadEndpoint="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0" Sep 13 00:12:38.372971 containerd[1973]: time="2025-09-13T00:12:38.371587951Z" level=info msg="CreateContainer within sandbox \"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:12:38.439182 containerd[1973]: time="2025-09-13T00:12:38.438804576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:38.439182 containerd[1973]: time="2025-09-13T00:12:38.438882710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:38.439182 containerd[1973]: time="2025-09-13T00:12:38.438906129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.441599 containerd[1973]: time="2025-09-13T00:12:38.441292572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.488791 systemd[1]: Started cri-containerd-6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a.scope - libcontainer container 6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a. 
Sep 13 00:12:38.552978 containerd[1973]: time="2025-09-13T00:12:38.552934751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-l7k22,Uid:cc1327ab-9550-47ae-bd83-f41e2cea6ca2,Namespace:calico-system,Attempt:1,} returns sandbox id \"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0\"" Sep 13 00:12:38.560655 systemd-networkd[1817]: cali86041604332: Gained IPv6LL Sep 13 00:12:38.567301 containerd[1973]: time="2025-09-13T00:12:38.567154843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6h8z,Uid:7bbf7a35-8d3f-406b-8fc5-674202715e39,Namespace:calico-system,Attempt:1,} returns sandbox id \"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a\"" Sep 13 00:12:38.579723 containerd[1973]: time="2025-09-13T00:12:38.579589530Z" level=info msg="CreateContainer within sandbox \"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76482e05b36bdf11ec09bb92a8b093176abcacf8fda8a1ee13ce33b8c9bec7d6\"" Sep 13 00:12:38.583525 containerd[1973]: time="2025-09-13T00:12:38.581359687Z" level=info msg="StartContainer for \"76482e05b36bdf11ec09bb92a8b093176abcacf8fda8a1ee13ce33b8c9bec7d6\"" Sep 13 00:12:38.636686 systemd[1]: Started cri-containerd-76482e05b36bdf11ec09bb92a8b093176abcacf8fda8a1ee13ce33b8c9bec7d6.scope - libcontainer container 76482e05b36bdf11ec09bb92a8b093176abcacf8fda8a1ee13ce33b8c9bec7d6. Sep 13 00:12:38.679874 systemd[1]: run-netns-cni\x2d9a6e2abd\x2de43d\x2d989c\x2de5dd\x2d081cb5f26cc5.mount: Deactivated successfully. Sep 13 00:12:38.756312 systemd-networkd[1817]: cali68ca67d3d66: Link UP Sep 13 00:12:38.762045 systemd-networkd[1817]: cali68ca67d3d66: Gained carrier Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.567 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0 coredns-674b8bbfcf- kube-system a3bb9403-ed1c-46f5-b8ac-f060a9d357fb 1018 0 2025-09-13 00:11:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-45 coredns-674b8bbfcf-96m5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68ca67d3d66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.569 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.623 [INFO][5541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" HandleID="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.624 [INFO][5541] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" HandleID="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-45", "pod":"coredns-674b8bbfcf-96m5z", "timestamp":"2025-09-13 00:12:38.623684431 +0000 UTC"}, Hostname:"ip-172-31-31-45", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.624 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.624 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.624 [INFO][5541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-45' Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.638 [INFO][5541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.649 [INFO][5541] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.670 [INFO][5541] ipam/ipam.go 511: Trying affinity for 192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.678 [INFO][5541] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.698 [INFO][5541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.0/26 host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.698 [INFO][5541] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.1.0/26 handle="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.702 [INFO][5541] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.713 [INFO][5541] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.1.0/26 handle="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.736 [INFO][5541] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.1.8/26] block=192.168.1.0/26 handle="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.736 [INFO][5541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.8/26] handle="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" host="ip-172-31-31-45" Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.736 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:38.811456 containerd[1973]: 2025-09-13 00:12:38.736 [INFO][5541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.1.8/26] IPv6=[] ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" HandleID="k8s-pod-network.b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.748 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"", Pod:"coredns-674b8bbfcf-96m5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ca67d3d66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.748 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.8/32] ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.748 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68ca67d3d66 ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.762 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" 
WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.767 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d", Pod:"coredns-674b8bbfcf-96m5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ca67d3d66", MAC:"e2:c6:01:ee:0b:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:38.814781 containerd[1973]: 2025-09-13 00:12:38.804 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d" Namespace="kube-system" Pod="coredns-674b8bbfcf-96m5z" WorkloadEndpoint="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0" Sep 13 00:12:38.864722 containerd[1973]: time="2025-09-13T00:12:38.861288874Z" level=info msg="StartContainer for \"76482e05b36bdf11ec09bb92a8b093176abcacf8fda8a1ee13ce33b8c9bec7d6\" returns successfully" Sep 13 00:12:38.911009 containerd[1973]: time="2025-09-13T00:12:38.910419636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:38.911009 containerd[1973]: time="2025-09-13T00:12:38.910784336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:38.911494 containerd[1973]: time="2025-09-13T00:12:38.911291844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.911494 containerd[1973]: time="2025-09-13T00:12:38.911457653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:38.966278 systemd[1]: run-containerd-runc-k8s.io-b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d-runc.Lf85Ne.mount: Deactivated successfully. Sep 13 00:12:38.978720 systemd[1]: Started cri-containerd-b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d.scope - libcontainer container b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d. Sep 13 00:12:39.083521 containerd[1973]: time="2025-09-13T00:12:39.083336235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96m5z,Uid:a3bb9403-ed1c-46f5-b8ac-f060a9d357fb,Namespace:kube-system,Attempt:1,} returns sandbox id \"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d\"" Sep 13 00:12:39.090489 containerd[1973]: time="2025-09-13T00:12:39.090388729Z" level=info msg="CreateContainer within sandbox \"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:12:39.113406 containerd[1973]: time="2025-09-13T00:12:39.113243425Z" level=info msg="CreateContainer within sandbox \"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90f8c79fd8e30cc3b1861eaa78d7b1b3acc9459073f8bc2f60c4881daf4489d4\"" Sep 13 00:12:39.116526 containerd[1973]: time="2025-09-13T00:12:39.115141410Z" level=info msg="StartContainer for \"90f8c79fd8e30cc3b1861eaa78d7b1b3acc9459073f8bc2f60c4881daf4489d4\"" Sep 13 00:12:39.161684 systemd[1]: Started cri-containerd-90f8c79fd8e30cc3b1861eaa78d7b1b3acc9459073f8bc2f60c4881daf4489d4.scope - libcontainer container 90f8c79fd8e30cc3b1861eaa78d7b1b3acc9459073f8bc2f60c4881daf4489d4. 
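The coredns endpoint dump above carries typed ports: numorstring.Protocol{Type:1, StrVal:"UDP"} with Port:0x35 (decimal 53) and Port:0x23c1 (decimal 9153); the hex values are just Go's %#v formatting of the port numbers. The Protocol type is a number-or-string union on the wire. A minimal re-implementation of that pattern (not Calico's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Protocol is either a numeric IP protocol or a name like "UDP" in JSON.
    type Protocol struct {
        IsStr bool
        Num   uint8
        Str   string
    }

    func (p Protocol) MarshalJSON() ([]byte, error) {
        if p.IsStr {
            return json.Marshal(p.Str)
        }
        return json.Marshal(p.Num)
    }

    func (p *Protocol) UnmarshalJSON(b []byte) error {
        if err := json.Unmarshal(b, &p.Str); err == nil {
            p.IsStr = true
            return nil
        }
        p.IsStr = false
        return json.Unmarshal(b, &p.Num)
    }

    func main() {
        raw := []byte(`[{"name":"dns","protocol":"UDP","port":53},{"name":"metrics","protocol":"TCP","port":9153}]`)
        var ports []struct {
            Name     string   `json:"name"`
            Protocol Protocol `json:"protocol"`
            Port     uint16   `json:"port"`
        }
        if err := json.Unmarshal(raw, &ports); err != nil {
            panic(err)
        }
        for _, p := range ports {
            out, _ := json.Marshal(p.Protocol)
            fmt.Printf("%s -> %s %d\n", p.Name, out, p.Port)
        }
    }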
Sep 13 00:12:39.207883 containerd[1973]: time="2025-09-13T00:12:39.207672954Z" level=info msg="StartContainer for \"90f8c79fd8e30cc3b1861eaa78d7b1b3acc9459073f8bc2f60c4881daf4489d4\" returns successfully" Sep 13 00:12:39.393209 systemd-networkd[1817]: cali64f0032fe16: Gained IPv6LL Sep 13 00:12:39.469475 kubelet[3190]: I0913 00:12:39.467889 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-96m5z" podStartSLOduration=42.467863578 podStartE2EDuration="42.467863578s" podCreationTimestamp="2025-09-13 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:39.457052953 +0000 UTC m=+47.697785494" watchObservedRunningTime="2025-09-13 00:12:39.467863578 +0000 UTC m=+47.708596119" Sep 13 00:12:39.489914 kubelet[3190]: I0913 00:12:39.488716 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pc25k" podStartSLOduration=42.488675532 podStartE2EDuration="42.488675532s" podCreationTimestamp="2025-09-13 00:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:39.486279992 +0000 UTC m=+47.727012532" watchObservedRunningTime="2025-09-13 00:12:39.488675532 +0000 UTC m=+47.729408075" Sep 13 00:12:39.776802 systemd-networkd[1817]: cali0437f99b393: Gained IPv6LL Sep 13 00:12:39.777093 systemd-networkd[1817]: calic02c8bdc50b: Gained IPv6LL Sep 13 00:12:40.033143 systemd-networkd[1817]: cali68ca67d3d66: Gained IPv6LL Sep 13 00:12:41.370623 containerd[1973]: time="2025-09-13T00:12:41.370562590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:41.372737 containerd[1973]: time="2025-09-13T00:12:41.372563700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:12:41.375592 containerd[1973]: time="2025-09-13T00:12:41.375314219Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:41.378836 containerd[1973]: time="2025-09-13T00:12:41.378799045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:41.380070 containerd[1973]: time="2025-09-13T00:12:41.379534153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.088882294s" Sep 13 00:12:41.380070 containerd[1973]: time="2025-09-13T00:12:41.379569569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:12:41.381183 containerd[1973]: time="2025-09-13T00:12:41.381145535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:12:41.391997 containerd[1973]: time="2025-09-13T00:12:41.391955987Z" level=info 
msg="CreateContainer within sandbox \"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:12:41.416005 containerd[1973]: time="2025-09-13T00:12:41.415958985Z" level=info msg="CreateContainer within sandbox \"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5bc793ed90183ea52eaec5542aa702fc7f5d381469efcfdfa6a998e21dae4b01\"" Sep 13 00:12:41.417252 containerd[1973]: time="2025-09-13T00:12:41.417130786Z" level=info msg="StartContainer for \"5bc793ed90183ea52eaec5542aa702fc7f5d381469efcfdfa6a998e21dae4b01\"" Sep 13 00:12:41.534357 systemd[1]: Started cri-containerd-5bc793ed90183ea52eaec5542aa702fc7f5d381469efcfdfa6a998e21dae4b01.scope - libcontainer container 5bc793ed90183ea52eaec5542aa702fc7f5d381469efcfdfa6a998e21dae4b01. Sep 13 00:12:41.607823 containerd[1973]: time="2025-09-13T00:12:41.607770515Z" level=info msg="StartContainer for \"5bc793ed90183ea52eaec5542aa702fc7f5d381469efcfdfa6a998e21dae4b01\" returns successfully" Sep 13 00:12:41.724654 containerd[1973]: time="2025-09-13T00:12:41.724518970Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:41.731001 containerd[1973]: time="2025-09-13T00:12:41.730667078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:12:41.736209 containerd[1973]: time="2025-09-13T00:12:41.736159820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 354.967958ms" Sep 13 00:12:41.736405 containerd[1973]: time="2025-09-13T00:12:41.736384027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:12:41.739391 containerd[1973]: time="2025-09-13T00:12:41.738724248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:12:41.744003 containerd[1973]: time="2025-09-13T00:12:41.743966081Z" level=info msg="CreateContainer within sandbox \"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:12:41.781617 containerd[1973]: time="2025-09-13T00:12:41.781298537Z" level=info msg="CreateContainer within sandbox \"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"385aabaa46e20d9888c638c2c289985bd6f6e7330a5a49b8816f2791fa121f7a\"" Sep 13 00:12:41.782563 containerd[1973]: time="2025-09-13T00:12:41.782533147Z" level=info msg="StartContainer for \"385aabaa46e20d9888c638c2c289985bd6f6e7330a5a49b8816f2791fa121f7a\"" Sep 13 00:12:41.835680 systemd[1]: Started cri-containerd-385aabaa46e20d9888c638c2c289985bd6f6e7330a5a49b8816f2791fa121f7a.scope - libcontainer container 385aabaa46e20d9888c638c2c289985bd6f6e7330a5a49b8816f2791fa121f7a. 
Sep 13 00:12:41.978320 containerd[1973]: time="2025-09-13T00:12:41.978188762Z" level=info msg="StartContainer for \"385aabaa46e20d9888c638c2c289985bd6f6e7330a5a49b8816f2791fa121f7a\" returns successfully" Sep 13 00:12:42.517013 systemd[1]: Started sshd@8-172.31.31.45:22-139.178.89.65:37076.service - OpenSSH per-connection server daemon (139.178.89.65:37076). Sep 13 00:12:42.585631 kubelet[3190]: I0913 00:12:42.583002 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-856f469597-zpqrf" podStartSLOduration=31.416568447 podStartE2EDuration="35.582967585s" podCreationTimestamp="2025-09-13 00:12:07 +0000 UTC" firstStartedPulling="2025-09-13 00:12:37.214563251 +0000 UTC m=+45.455295787" lastFinishedPulling="2025-09-13 00:12:41.380962393 +0000 UTC m=+49.621694925" observedRunningTime="2025-09-13 00:12:42.575302401 +0000 UTC m=+50.816034941" watchObservedRunningTime="2025-09-13 00:12:42.582967585 +0000 UTC m=+50.823700126" Sep 13 00:12:42.814518 sshd[5771]: Accepted publickey for core from 139.178.89.65 port 37076 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:12:42.820131 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:42.827372 ntpd[1951]: Listen normally on 7 vxlan.calico 192.168.1.0:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 7 vxlan.calico 192.168.1.0:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 8 vxlan.calico [fe80::6449:36ff:fef7:e47a%4]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 9 caliee1db7c9ec3 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 10 cali2764d338f41 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 11 cali86041604332 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 12 cali7da32fcfe75 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 13 cali64f0032fe16 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 14 cali0437f99b393 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 15 calic02c8bdc50b [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:12:42.829368 ntpd[1951]: 13 Sep 00:12:42 ntpd[1951]: Listen normally on 16 cali68ca67d3d66 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:12:42.827489 ntpd[1951]: Listen normally on 8 vxlan.calico [fe80::6449:36ff:fef7:e47a%4]:123 Sep 13 00:12:42.827546 ntpd[1951]: Listen normally on 9 caliee1db7c9ec3 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 13 00:12:42.827584 ntpd[1951]: Listen normally on 10 cali2764d338f41 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 13 00:12:42.827622 ntpd[1951]: Listen normally on 11 cali86041604332 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 13 00:12:42.827659 ntpd[1951]: Listen normally on 12 cali7da32fcfe75 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:12:42.827696 ntpd[1951]: Listen normally on 13 cali64f0032fe16 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:12:42.827730 ntpd[1951]: Listen normally on 14 cali0437f99b393 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:12:42.827766 ntpd[1951]: Listen normally on 15 calic02c8bdc50b [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:12:42.827805 
ntpd[1951]: Listen normally on 16 cali68ca67d3d66 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:12:42.839395 systemd-logind[1958]: New session 9 of user core. Sep 13 00:12:42.842683 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:12:43.557087 kubelet[3190]: I0913 00:12:43.556579 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:44.039291 containerd[1973]: time="2025-09-13T00:12:44.038204658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:44.040967 containerd[1973]: time="2025-09-13T00:12:44.040915863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:12:44.045321 containerd[1973]: time="2025-09-13T00:12:44.043562409Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:44.059590 containerd[1973]: time="2025-09-13T00:12:44.059530585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:44.062987 containerd[1973]: time="2025-09-13T00:12:44.062607400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.323835231s" Sep 13 00:12:44.109886 containerd[1973]: time="2025-09-13T00:12:44.062656090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:12:44.115762 containerd[1973]: time="2025-09-13T00:12:44.115721615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:12:44.123492 containerd[1973]: time="2025-09-13T00:12:44.122869074Z" level=info msg="CreateContainer within sandbox \"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:12:44.143060 sshd[5771]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:44.168595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437948145.mount: Deactivated successfully. Sep 13 00:12:44.170032 systemd[1]: sshd@8-172.31.31.45:22-139.178.89.65:37076.service: Deactivated successfully. Sep 13 00:12:44.179317 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:12:44.185094 systemd-logind[1958]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:12:44.188041 systemd-logind[1958]: Removed session 9. 
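In the ntpd records above, every cali* interface carries the same link-local address, fe80::ecee:eeff:feee:eeee, distinguished only by the %zone index. That is consistent with Calico assigning a fixed MAC (ee:ee:ee:ee:ee:ee) to host-side veths, whose EUI-64 expansion yields exactly that address. A sketch of the derivation and of zone handling in Go:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // eui64LinkLocal builds fe80::/64 + EUI-64 from a MAC: flip the U/L bit
    // of the first octet, then insert ff:fe between the two MAC halves.
    func eui64LinkLocal(mac [6]byte) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80
        copy(b[8:11], []byte{mac[0] ^ 0x02, mac[1], mac[2]})
        b[11], b[12] = 0xff, 0xfe
        copy(b[13:16], mac[3:])
        return netip.AddrFrom16(b)
    }

    func main() {
        // Prints fe80::ecee:eeff:feee:eeee, matching the ntpd listeners.
        fmt.Println(eui64LinkLocal([6]byte{0xee, 0xee, 0xee, 0xee, 0xee, 0xee}))

        // Zones disambiguate otherwise-identical link-local addresses.
        addr := netip.MustParseAddr("fe80::ecee:eeff:feee:eeee%7")
        fmt.Println(addr.WithZone(""), "zone:", addr.Zone())
    }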
Sep 13 00:12:44.190056 containerd[1973]: time="2025-09-13T00:12:44.189887131Z" level=info msg="CreateContainer within sandbox \"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df\"" Sep 13 00:12:44.192924 containerd[1973]: time="2025-09-13T00:12:44.192889708Z" level=info msg="StartContainer for \"f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df\"" Sep 13 00:12:44.308931 systemd[1]: run-containerd-runc-k8s.io-f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df-runc.mutN8N.mount: Deactivated successfully. Sep 13 00:12:44.320692 systemd[1]: Started cri-containerd-f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df.scope - libcontainer container f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df. Sep 13 00:12:44.420096 containerd[1973]: time="2025-09-13T00:12:44.420047316Z" level=info msg="StartContainer for \"f21be59fcaa7847ec26ff12cdbd40a0fcdf848f16f5bb223ea3b984c6d3d03df\" returns successfully" Sep 13 00:12:45.526442 kubelet[3190]: I0913 00:12:45.526319 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-856f469597-bn26z" podStartSLOduration=34.019743305 podStartE2EDuration="38.52629609s" podCreationTimestamp="2025-09-13 00:12:07 +0000 UTC" firstStartedPulling="2025-09-13 00:12:37.231384824 +0000 UTC m=+45.472117355" lastFinishedPulling="2025-09-13 00:12:41.737937598 +0000 UTC m=+49.978670140" observedRunningTime="2025-09-13 00:12:42.611784409 +0000 UTC m=+50.852516952" watchObservedRunningTime="2025-09-13 00:12:45.52629609 +0000 UTC m=+53.767028630" Sep 13 00:12:49.016064 containerd[1973]: time="2025-09-13T00:12:49.016003429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:49.019046 containerd[1973]: time="2025-09-13T00:12:49.018918436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:12:49.066996 containerd[1973]: time="2025-09-13T00:12:49.066953130Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:49.085701 containerd[1973]: time="2025-09-13T00:12:49.085624156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:49.089037 containerd[1973]: time="2025-09-13T00:12:49.088843121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.973071263s" Sep 13 00:12:49.089037 containerd[1973]: time="2025-09-13T00:12:49.088902830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:12:49.165855 containerd[1973]: time="2025-09-13T00:12:49.164665016Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:12:49.200807 systemd[1]: Started sshd@9-172.31.31.45:22-139.178.89.65:37086.service - OpenSSH per-connection server daemon (139.178.89.65:37086). Sep 13 00:12:49.525221 containerd[1973]: time="2025-09-13T00:12:49.525185748Z" level=info msg="CreateContainer within sandbox \"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:12:49.564744 sshd[5853]: Accepted publickey for core from 139.178.89.65 port 37086 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:12:49.570670 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:49.583631 systemd-logind[1958]: New session 10 of user core. Sep 13 00:12:49.588693 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:12:49.618567 containerd[1973]: time="2025-09-13T00:12:49.617318469Z" level=info msg="CreateContainer within sandbox \"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd\"" Sep 13 00:12:49.630412 containerd[1973]: time="2025-09-13T00:12:49.628866251Z" level=info msg="StartContainer for \"c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd\"" Sep 13 00:12:49.844685 systemd[1]: Started cri-containerd-c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd.scope - libcontainer container c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd. Sep 13 00:12:49.979218 containerd[1973]: time="2025-09-13T00:12:49.978678316Z" level=info msg="StartContainer for \"c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd\" returns successfully" Sep 13 00:12:51.650485 kubelet[3190]: I0913 00:12:51.616531 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5448c99594-x6ss9" podStartSLOduration=28.138509785 podStartE2EDuration="39.567931018s" podCreationTimestamp="2025-09-13 00:12:12 +0000 UTC" firstStartedPulling="2025-09-13 00:12:37.714548465 +0000 UTC m=+45.955280983" lastFinishedPulling="2025-09-13 00:12:49.143969684 +0000 UTC m=+57.384702216" observedRunningTime="2025-09-13 00:12:51.452723579 +0000 UTC m=+59.693456121" watchObservedRunningTime="2025-09-13 00:12:51.567931018 +0000 UTC m=+59.808663561" Sep 13 00:12:51.720267 sshd[5853]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:51.726489 systemd[1]: sshd@9-172.31.31.45:22-139.178.89.65:37086.service: Deactivated successfully. Sep 13 00:12:51.727013 systemd-logind[1958]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:12:51.731775 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:12:51.741329 systemd-logind[1958]: Removed session 10. Sep 13 00:12:51.774852 systemd[1]: Started sshd@10-172.31.31.45:22-139.178.89.65:41318.service - OpenSSH per-connection server daemon (139.178.89.65:41318). 
Sep 13 00:12:51.998119 sshd[5941]: Accepted publickey for core from 139.178.89.65 port 41318 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:12:51.999659 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:52.021233 containerd[1973]: time="2025-09-13T00:12:52.021174008Z" level=info msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\""
Sep 13 00:12:52.024530 systemd-logind[1958]: New session 11 of user core.
Sep 13 00:12:52.028071 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:12:54.074673 sshd[5941]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:54.138169 systemd[1]: sshd@10-172.31.31.45:22-139.178.89.65:41318.service: Deactivated successfully.
Sep 13 00:12:54.146782 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:12:54.155953 systemd-logind[1958]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:12:54.167940 systemd[1]: Started sshd@11-172.31.31.45:22-139.178.89.65:41332.service - OpenSSH per-connection server daemon (139.178.89.65:41332).
Sep 13 00:12:54.176492 systemd-logind[1958]: Removed session 11.
Sep 13 00:12:54.461498 sshd[5982]: Accepted publickey for core from 139.178.89.65 port 41332 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho
Sep 13 00:12:54.465621 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:54.474646 systemd-logind[1958]: New session 12 of user core.
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:53.566 [WARNING][5955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bbf7a35-8d3f-406b-8fc5-674202715e39", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a", Pod:"csi-node-driver-r6h8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic02c8bdc50b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:53.576 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:53.576 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" iface="eth0" netns=""
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:53.576 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:53.576 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.427 [INFO][5975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.430 [INFO][5975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.431 [INFO][5975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.454 [WARNING][5975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.455 [INFO][5975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.459 [INFO][5975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:54.478161 containerd[1973]: 2025-09-13 00:12:54.471 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:54.478161 containerd[1973]: time="2025-09-13T00:12:54.476168123Z" level=info msg="TearDown network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" successfully"
Sep 13 00:12:54.478161 containerd[1973]: time="2025-09-13T00:12:54.476206385Z" level=info msg="StopPodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" returns successfully"
Sep 13 00:12:54.480663 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:12:54.850668 containerd[1973]: time="2025-09-13T00:12:54.849977867Z" level=info msg="RemovePodSandbox for \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\""
Sep 13 00:12:54.882968 containerd[1973]: time="2025-09-13T00:12:54.882918152Z" level=info msg="Forcibly stopping sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\""
Sep 13 00:12:55.041987 sshd[5982]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:55.050295 systemd[1]: sshd@11-172.31.31.45:22-139.178.89.65:41332.service: Deactivated successfully.
Sep 13 00:12:55.054341 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:12:55.057226 systemd-logind[1958]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:12:55.060629 systemd-logind[1958]: Removed session 12.
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.372 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bbf7a35-8d3f-406b-8fc5-674202715e39", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a", Pod:"csi-node-driver-r6h8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic02c8bdc50b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.379 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.379 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" iface="eth0" netns=""
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.379 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.379 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.433 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.434 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.434 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.449 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.449 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" HandleID="k8s-pod-network.1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8" Workload="ip--172--31--31--45-k8s-csi--node--driver--r6h8z-eth0"
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.451 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:55.460927 containerd[1973]: 2025-09-13 00:12:55.457 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8"
Sep 13 00:12:55.464975 containerd[1973]: time="2025-09-13T00:12:55.461587958Z" level=info msg="TearDown network for sandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" successfully"
Sep 13 00:12:55.571142 containerd[1973]: time="2025-09-13T00:12:55.571101123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:55.619999 containerd[1973]: time="2025-09-13T00:12:55.619762636Z" level=info msg="RemovePodSandbox \"1f07130bc0090f28d8a1066cfe488395169f3db4cb9a164e5614b987b0cb33f8\" returns successfully"
Sep 13 00:12:55.636279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086510496.mount: Deactivated successfully.
Sep 13 00:12:55.651211 containerd[1973]: time="2025-09-13T00:12:55.650770925Z" level=info msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\""
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.729 [WARNING][6032] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c13143f6-e5dc-44d2-a955-cb02eb84e81b", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946", Pod:"coredns-674b8bbfcf-pc25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f0032fe16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.729 [INFO][6032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.729 [INFO][6032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" iface="eth0" netns=""
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.729 [INFO][6032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.729 [INFO][6032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.767 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.767 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.767 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.775 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.775 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.778 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:55.784607 containerd[1973]: 2025-09-13 00:12:55.781 [INFO][6032] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.786142 containerd[1973]: time="2025-09-13T00:12:55.784571225Z" level=info msg="TearDown network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" successfully"
Sep 13 00:12:55.786142 containerd[1973]: time="2025-09-13T00:12:55.784689971Z" level=info msg="StopPodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" returns successfully"
Sep 13 00:12:55.786142 containerd[1973]: time="2025-09-13T00:12:55.785926339Z" level=info msg="RemovePodSandbox for \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\""
Sep 13 00:12:55.786142 containerd[1973]: time="2025-09-13T00:12:55.785960699Z" level=info msg="Forcibly stopping sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\""
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.855 [WARNING][6056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c13143f6-e5dc-44d2-a955-cb02eb84e81b", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"0d1d07e1a080bba5a53494a95b2f2a55bee7d7ab534ed325986e9fb964c2c946", Pod:"coredns-674b8bbfcf-pc25k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f0032fe16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.855 [INFO][6056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.855 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" iface="eth0" netns=""
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.855 [INFO][6056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.855 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.899 [INFO][6065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.899 [INFO][6065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.899 [INFO][6065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.910 [WARNING][6065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.911 [INFO][6065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" HandleID="k8s-pod-network.2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--pc25k-eth0"
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.914 [INFO][6065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:55.919678 containerd[1973]: 2025-09-13 00:12:55.916 [INFO][6056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e"
Sep 13 00:12:55.920972 containerd[1973]: time="2025-09-13T00:12:55.920516664Z" level=info msg="TearDown network for sandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" successfully"
Sep 13 00:12:55.941151 containerd[1973]: time="2025-09-13T00:12:55.940685300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:55.941151 containerd[1973]: time="2025-09-13T00:12:55.940767205Z" level=info msg="RemovePodSandbox \"2119827b927352247d8389839b3beaf301cbf5a2cbaa53ef6e7471c8af3b6f3e\" returns successfully"
Sep 13 00:12:55.941684 containerd[1973]: time="2025-09-13T00:12:55.941577013Z" level=info msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\""
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.005 [WARNING][6079] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.005 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.006 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" iface="eth0" netns=""
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.006 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.006 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.056 [INFO][6086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.057 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.057 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.070 [WARNING][6086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.070 [INFO][6086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.073 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:56.083914 containerd[1973]: 2025-09-13 00:12:56.078 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.087613 containerd[1973]: time="2025-09-13T00:12:56.084073383Z" level=info msg="TearDown network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" successfully"
Sep 13 00:12:56.087613 containerd[1973]: time="2025-09-13T00:12:56.084110151Z" level=info msg="StopPodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" returns successfully"
Sep 13 00:12:56.087613 containerd[1973]: time="2025-09-13T00:12:56.084862695Z" level=info msg="RemovePodSandbox for \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\""
Sep 13 00:12:56.087613 containerd[1973]: time="2025-09-13T00:12:56.084894368Z" level=info msg="Forcibly stopping sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\""
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.146 [WARNING][6101] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" WorkloadEndpoint="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.146 [INFO][6101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.147 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" iface="eth0" netns=""
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.147 [INFO][6101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.147 [INFO][6101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.262 [INFO][6108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.262 [INFO][6108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.263 [INFO][6108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.275 [WARNING][6108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.275 [INFO][6108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" HandleID="k8s-pod-network.cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e" Workload="ip--172--31--31--45-k8s-whisker--74b4d94f48--g9zgz-eth0"
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.278 [INFO][6108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:56.287928 containerd[1973]: 2025-09-13 00:12:56.284 [INFO][6101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e"
Sep 13 00:12:56.323596 containerd[1973]: time="2025-09-13T00:12:56.288501964Z" level=info msg="TearDown network for sandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" successfully"
Sep 13 00:12:56.323596 containerd[1973]: time="2025-09-13T00:12:56.294685012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:56.323596 containerd[1973]: time="2025-09-13T00:12:56.294762473Z" level=info msg="RemovePodSandbox \"cf5d0f48e8a4e0ff62c3f2f1530979bf07f4559b207823e2d8fb6e7de4009e0e\" returns successfully"
Sep 13 00:12:56.323596 containerd[1973]: time="2025-09-13T00:12:56.296037507Z" level=info msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\""
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.376 [WARNING][6122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d", Pod:"coredns-674b8bbfcf-96m5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ca67d3d66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.376 [INFO][6122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.376 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" iface="eth0" netns=""
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.376 [INFO][6122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.377 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.432 [INFO][6129] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.432 [INFO][6129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.432 [INFO][6129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.443 [WARNING][6129] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.443 [INFO][6129] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.447 [INFO][6129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:56.461547 containerd[1973]: 2025-09-13 00:12:56.452 [INFO][6122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.461547 containerd[1973]: time="2025-09-13T00:12:56.458478056Z" level=info msg="TearDown network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" successfully"
Sep 13 00:12:56.461547 containerd[1973]: time="2025-09-13T00:12:56.458510157Z" level=info msg="StopPodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" returns successfully"
Sep 13 00:12:56.491765 containerd[1973]: time="2025-09-13T00:12:56.483894940Z" level=info msg="RemovePodSandbox for \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\""
Sep 13 00:12:56.491765 containerd[1973]: time="2025-09-13T00:12:56.483950767Z" level=info msg="Forcibly stopping sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\""
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.640 [WARNING][6143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a3bb9403-ed1c-46f5-b8ac-f060a9d357fb", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"b026102e6fddd4d9ed047e10f3b0f3e613fe18b78e2f86fb060a569e13c1e88d", Pod:"coredns-674b8bbfcf-96m5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68ca67d3d66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.641 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.641 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" iface="eth0" netns=""
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.641 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.641 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.685 [INFO][6150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.685 [INFO][6150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.685 [INFO][6150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.709 [WARNING][6150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.709 [INFO][6150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" HandleID="k8s-pod-network.853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c" Workload="ip--172--31--31--45-k8s-coredns--674b8bbfcf--96m5z-eth0"
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.714 [INFO][6150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:56.724041 containerd[1973]: 2025-09-13 00:12:56.718 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c"
Sep 13 00:12:56.726624 containerd[1973]: time="2025-09-13T00:12:56.724008456Z" level=info msg="TearDown network for sandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" successfully"
Sep 13 00:12:56.733035 containerd[1973]: time="2025-09-13T00:12:56.732990317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:56.733513 containerd[1973]: time="2025-09-13T00:12:56.733474326Z" level=info msg="RemovePodSandbox \"853fd26a191734a823df6b83075396d84c72f02bc4881baf0a8efa8c57ad226c\" returns successfully"
Sep 13 00:12:56.735138 containerd[1973]: time="2025-09-13T00:12:56.735100775Z" level=info msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\""
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.804 [WARNING][6165] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0", GenerateName:"calico-kube-controllers-5448c99594-", Namespace:"calico-system", SelfLink:"", UID:"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448c99594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7", Pod:"calico-kube-controllers-5448c99594-x6ss9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da32fcfe75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.806 [INFO][6165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.806 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" iface="eth0" netns=""
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.806 [INFO][6165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.806 [INFO][6165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.876 [INFO][6172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.876 [INFO][6172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.878 [INFO][6172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.887 [WARNING][6172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.887 [INFO][6172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.890 [INFO][6172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:56.898143 containerd[1973]: 2025-09-13 00:12:56.894 [INFO][6165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:56.898143 containerd[1973]: time="2025-09-13T00:12:56.897594013Z" level=info msg="TearDown network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" successfully"
Sep 13 00:12:56.898143 containerd[1973]: time="2025-09-13T00:12:56.897629373Z" level=info msg="StopPodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" returns successfully"
Sep 13 00:12:56.932501 containerd[1973]: time="2025-09-13T00:12:56.932457998Z" level=info msg="RemovePodSandbox for \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\""
Sep 13 00:12:56.932632 containerd[1973]: time="2025-09-13T00:12:56.932508099Z" level=info msg="Forcibly stopping sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\""
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:56.998 [WARNING][6186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0", GenerateName:"calico-kube-controllers-5448c99594-", Namespace:"calico-system", SelfLink:"", UID:"b2cc29fb-8087-44c7-b1ba-93ec8b4c09fd", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448c99594", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"18bf9c17e1f496027e5709613a73fbc9e72911faaac859e62a987b3c9ea5d7b7", Pod:"calico-kube-controllers-5448c99594-x6ss9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7da32fcfe75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:56.998 [INFO][6186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:56.998 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" iface="eth0" netns=""
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:56.998 [INFO][6186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:56.998 [INFO][6186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.070 [INFO][6194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.071 [INFO][6194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.071 [INFO][6194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.103 [WARNING][6194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.103 [INFO][6194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" HandleID="k8s-pod-network.9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b" Workload="ip--172--31--31--45-k8s-calico--kube--controllers--5448c99594--x6ss9-eth0"
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.110 [INFO][6194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:57.121642 containerd[1973]: 2025-09-13 00:12:57.115 [INFO][6186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b"
Sep 13 00:12:57.125161 containerd[1973]: time="2025-09-13T00:12:57.125113433Z" level=info msg="TearDown network for sandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" successfully"
Sep 13 00:12:57.144003 containerd[1973]: time="2025-09-13T00:12:57.143339176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:12:57.144003 containerd[1973]: time="2025-09-13T00:12:57.143410493Z" level=info msg="RemovePodSandbox \"9e90ac67de70e44f3a68a272ef014d0e0f614024f291e15c5097ee6c8d72603b\" returns successfully"
Sep 13 00:12:57.479332 containerd[1973]: time="2025-09-13T00:12:57.479182647Z" level=info msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\""
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.564 [WARNING][6209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"e424bb58-dcec-4399-856b-a30a0d9831b0", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081", Pod:"calico-apiserver-856f469597-bn26z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1db7c9ec3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.564 [INFO][6209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.564 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" iface="eth0" netns=""
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.564 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.564 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.684 [INFO][6216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.684 [INFO][6216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.684 [INFO][6216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.692 [WARNING][6216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.692 [INFO][6216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0"
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.694 [INFO][6216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:12:57.699821 containerd[1973]: 2025-09-13 00:12:57.696 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.701942 containerd[1973]: time="2025-09-13T00:12:57.699866853Z" level=info msg="TearDown network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" successfully"
Sep 13 00:12:57.701942 containerd[1973]: time="2025-09-13T00:12:57.699896079Z" level=info msg="StopPodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" returns successfully"
Sep 13 00:12:57.701942 containerd[1973]: time="2025-09-13T00:12:57.700732488Z" level=info msg="RemovePodSandbox for \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\""
Sep 13 00:12:57.701942 containerd[1973]: time="2025-09-13T00:12:57.700759789Z" level=info msg="Forcibly stopping sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\""
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.773 [WARNING][6230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"e424bb58-dcec-4399-856b-a30a0d9831b0", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"bc835e686b80b0ea4b0adccd1d17a52a6935f6aebdd7b3f9fdfdd440cafa7081", Pod:"calico-apiserver-856f469597-bn26z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1db7c9ec3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.774 [INFO][6230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.774 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" iface="eth0" netns=""
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.774 [INFO][6230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.774 [INFO][6230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21"
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.848 [INFO][6241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0"
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.849 [INFO][6241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.850 [INFO][6241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.859 [WARNING][6241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.859 [INFO][6241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" HandleID="k8s-pod-network.7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--bn26z-eth0" Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.861 [INFO][6241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:57.867019 containerd[1973]: 2025-09-13 00:12:57.864 [INFO][6230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21" Sep 13 00:12:57.873067 containerd[1973]: time="2025-09-13T00:12:57.867072478Z" level=info msg="TearDown network for sandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" successfully" Sep 13 00:12:57.914343 containerd[1973]: time="2025-09-13T00:12:57.914289882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:57.914497 containerd[1973]: time="2025-09-13T00:12:57.914362327Z" level=info msg="RemovePodSandbox \"7c922ae204b2f2651bd1706b223589ca552794a8705db555ec88fe45005f8f21\" returns successfully" Sep 13 00:12:57.915642 containerd[1973]: time="2025-09-13T00:12:57.915594306Z" level=info msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" Sep 13 00:12:58.001339 containerd[1973]: time="2025-09-13T00:12:58.001093745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:58.085863 containerd[1973]: time="2025-09-13T00:12:58.001599313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:57.987 [WARNING][6255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc1327ab-9550-47ae-bd83-f41e2cea6ca2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0", Pod:"goldmane-54d579b49d-l7k22", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0437f99b393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:57.990 [INFO][6255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:57.990 [INFO][6255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" iface="eth0" netns="" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:57.990 [INFO][6255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:57.990 [INFO][6255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.088 [INFO][6262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.089 [INFO][6262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.089 [INFO][6262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.105 [WARNING][6262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.105 [INFO][6262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.108 [INFO][6262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:58.127946 containerd[1973]: 2025-09-13 00:12:58.111 [INFO][6255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.127946 containerd[1973]: time="2025-09-13T00:12:58.126370358Z" level=info msg="TearDown network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" successfully" Sep 13 00:12:58.127946 containerd[1973]: time="2025-09-13T00:12:58.126397031Z" level=info msg="StopPodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" returns successfully" Sep 13 00:12:58.130786 containerd[1973]: time="2025-09-13T00:12:58.129165089Z" level=info msg="RemovePodSandbox for \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" Sep 13 00:12:58.130786 containerd[1973]: time="2025-09-13T00:12:58.129205467Z" level=info msg="Forcibly stopping sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\"" Sep 13 00:12:58.152104 containerd[1973]: time="2025-09-13T00:12:58.152002677Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:58.198668 containerd[1973]: time="2025-09-13T00:12:58.198619427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:58.202824 containerd[1973]: time="2025-09-13T00:12:58.201382130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 9.036656079s" Sep 13 00:12:58.214654 containerd[1973]: time="2025-09-13T00:12:58.214550335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.207 [WARNING][6276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc1327ab-9550-47ae-bd83-f41e2cea6ca2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0", Pod:"goldmane-54d579b49d-l7k22", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0437f99b393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.209 [INFO][6276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.209 [INFO][6276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" iface="eth0" netns="" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.209 [INFO][6276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.209 [INFO][6276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.249 [INFO][6283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.249 [INFO][6283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.249 [INFO][6283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.258 [WARNING][6283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.258 [INFO][6283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" HandleID="k8s-pod-network.f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Workload="ip--172--31--31--45-k8s-goldmane--54d579b49d--l7k22-eth0" Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.260 [INFO][6283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:58.266906 containerd[1973]: 2025-09-13 00:12:58.263 [INFO][6276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e" Sep 13 00:12:58.269093 containerd[1973]: time="2025-09-13T00:12:58.266976649Z" level=info msg="TearDown network for sandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" successfully" Sep 13 00:12:58.289522 containerd[1973]: time="2025-09-13T00:12:58.289472708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:58.289640 containerd[1973]: time="2025-09-13T00:12:58.289542821Z" level=info msg="RemovePodSandbox \"f812dddfbf8daea55858cdfeaa42dffffa7c78bd7136c68f5f63d2f09fd2974e\" returns successfully" Sep 13 00:12:58.291010 containerd[1973]: time="2025-09-13T00:12:58.290977695Z" level=info msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.354 [WARNING][6298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"39751ac3-0924-426f-b77b-8d83a90311a5", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820", Pod:"calico-apiserver-856f469597-zpqrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2764d338f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.354 [INFO][6298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.355 [INFO][6298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" iface="eth0" netns="" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.355 [INFO][6298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.355 [INFO][6298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.391 [INFO][6305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.392 [INFO][6305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.392 [INFO][6305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.401 [WARNING][6305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.401 [INFO][6305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.403 [INFO][6305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:58.419241 containerd[1973]: 2025-09-13 00:12:58.409 [INFO][6298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.419241 containerd[1973]: time="2025-09-13T00:12:58.417008776Z" level=info msg="TearDown network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" successfully" Sep 13 00:12:58.419241 containerd[1973]: time="2025-09-13T00:12:58.418171526Z" level=info msg="StopPodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" returns successfully" Sep 13 00:12:58.431470 containerd[1973]: time="2025-09-13T00:12:58.430214672Z" level=info msg="RemovePodSandbox for \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" Sep 13 00:12:58.431470 containerd[1973]: time="2025-09-13T00:12:58.430262009Z" level=info msg="Forcibly stopping sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\"" Sep 13 00:12:58.436452 containerd[1973]: time="2025-09-13T00:12:58.436399663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.611 [WARNING][6319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0", GenerateName:"calico-apiserver-856f469597-", Namespace:"calico-apiserver", SelfLink:"", UID:"39751ac3-0924-426f-b77b-8d83a90311a5", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"856f469597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-45", ContainerID:"880bce9bffc1bc1e394d7d87b20ae52948943e411a12d0e8c2e1df0b4410e820", Pod:"calico-apiserver-856f469597-zpqrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2764d338f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.616 [INFO][6319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.616 [INFO][6319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" iface="eth0" netns="" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.616 [INFO][6319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.616 [INFO][6319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.757 [INFO][6327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.758 [INFO][6327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.758 [INFO][6327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.774 [WARNING][6327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.774 [INFO][6327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" HandleID="k8s-pod-network.4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Workload="ip--172--31--31--45-k8s-calico--apiserver--856f469597--zpqrf-eth0" Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.776 [INFO][6327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:58.791323 containerd[1973]: 2025-09-13 00:12:58.782 [INFO][6319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05" Sep 13 00:12:58.791323 containerd[1973]: time="2025-09-13T00:12:58.789565057Z" level=info msg="TearDown network for sandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" successfully" Sep 13 00:12:58.847983 containerd[1973]: time="2025-09-13T00:12:58.847622218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:58.847983 containerd[1973]: time="2025-09-13T00:12:58.847770559Z" level=info msg="RemovePodSandbox \"4a78c7ddb1a58fdddde335be31a75ba81df44bcc7e693a23b1888836ef56fe05\" returns successfully" Sep 13 00:12:58.991617 containerd[1973]: time="2025-09-13T00:12:58.991555338Z" level=info msg="CreateContainer within sandbox \"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:12:59.306709 containerd[1973]: time="2025-09-13T00:12:59.306665275Z" level=info msg="CreateContainer within sandbox \"858d18fdea27c34a1e554241cbf795de95fc4e08b78168367327a8fe789e29a0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e\"" Sep 13 00:12:59.313446 containerd[1973]: time="2025-09-13T00:12:59.313364734Z" level=info msg="StartContainer for \"40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e\"" Sep 13 00:12:59.600946 systemd[1]: run-containerd-runc-k8s.io-40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e-runc.hxrVJ0.mount: Deactivated successfully. Sep 13 00:12:59.615026 systemd[1]: Started cri-containerd-40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e.scope - libcontainer container 40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e. Sep 13 00:12:59.716594 containerd[1973]: time="2025-09-13T00:12:59.716550950Z" level=info msg="StartContainer for \"40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e\" returns successfully" Sep 13 00:13:00.155238 systemd[1]: Started sshd@12-172.31.31.45:22-139.178.89.65:42004.service - OpenSSH per-connection server daemon (139.178.89.65:42004). 
Sep 13 00:13:00.594716 sshd[6372]: Accepted publickey for core from 139.178.89.65 port 42004 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:00.603898 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:00.626273 systemd-logind[1958]: New session 13 of user core. Sep 13 00:13:00.632811 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:13:00.656461 kubelet[3190]: I0913 00:13:00.622454 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-l7k22" podStartSLOduration=29.71543497 podStartE2EDuration="49.566364524s" podCreationTimestamp="2025-09-13 00:12:11 +0000 UTC" firstStartedPulling="2025-09-13 00:12:38.555364593 +0000 UTC m=+46.796097112" lastFinishedPulling="2025-09-13 00:12:58.406294123 +0000 UTC m=+66.647026666" observedRunningTime="2025-09-13 00:13:00.448411622 +0000 UTC m=+68.689144162" watchObservedRunningTime="2025-09-13 00:13:00.566364524 +0000 UTC m=+68.807097077" Sep 13 00:13:00.711918 containerd[1973]: time="2025-09-13T00:13:00.711865383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:00.712974 containerd[1973]: time="2025-09-13T00:13:00.712824750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:13:00.714134 containerd[1973]: time="2025-09-13T00:13:00.713957749Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:00.716809 containerd[1973]: time="2025-09-13T00:13:00.716706533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:00.717375 containerd[1973]: time="2025-09-13T00:13:00.717206805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.280234525s" Sep 13 00:13:00.717375 containerd[1973]: time="2025-09-13T00:13:00.717237896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:13:00.721815 containerd[1973]: time="2025-09-13T00:13:00.721035582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:13:00.726986 containerd[1973]: time="2025-09-13T00:13:00.726893613Z" level=info msg="CreateContainer within sandbox \"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:13:00.760256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65351507.mount: Deactivated successfully. 
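The pod_startup_latency_tracker entry above for goldmane-54d579b49d-l7k22 encodes a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A worked check in Go using the four timestamps from that entry; the results match the logged values up to the rounding of the logged float:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the goldmane pod's startup numbers from the
// pod_startup_latency_tracker entry above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-09-13 00:12:11 +0000 UTC")
	firstPull := parse("2025-09-13 00:12:38.555364593 +0000 UTC")
	lastPull := parse("2025-09-13 00:12:58.406294123 +0000 UTC")
	running := parse("2025-09-13 00:13:00.566364524 +0000 UTC")

	e2e := running.Sub(created)          // full end-to-end startup
	slo := e2e - lastPull.Sub(firstPull) // excludes image pull time

	fmt.Println(e2e) // 49.566364524s  -> matches podStartE2EDuration
	fmt.Println(slo) // 29.715434994s  -> matches podStartSLOduration=29.71543497
}
```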
Sep 13 00:13:00.771818 containerd[1973]: time="2025-09-13T00:13:00.771338931Z" level=info msg="CreateContainer within sandbox \"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c\"" Sep 13 00:13:00.774178 containerd[1973]: time="2025-09-13T00:13:00.774130563Z" level=info msg="StartContainer for \"ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c\"" Sep 13 00:13:00.841925 systemd[1]: run-containerd-runc-k8s.io-ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c-runc.2QzBR0.mount: Deactivated successfully. Sep 13 00:13:00.852654 systemd[1]: Started cri-containerd-ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c.scope - libcontainer container ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c. Sep 13 00:13:00.913350 containerd[1973]: time="2025-09-13T00:13:00.913180292Z" level=info msg="StartContainer for \"ec56536b166276067e81529c9a78d57fcbdafdce63aee1f24d205e19ec644e1c\" returns successfully" Sep 13 00:13:02.942987 sshd[6372]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:03.033766 systemd[1]: sshd@12-172.31.31.45:22-139.178.89.65:42004.service: Deactivated successfully. Sep 13 00:13:03.097557 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:13:03.101254 systemd-logind[1958]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:13:03.112512 systemd[1]: Started sshd@13-172.31.31.45:22-139.178.89.65:42006.service - OpenSSH per-connection server daemon (139.178.89.65:42006). Sep 13 00:13:03.117393 systemd-logind[1958]: Removed session 13. Sep 13 00:13:03.494708 kubelet[3190]: I0913 00:13:03.494159 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:13:03.521117 sshd[6471]: Accepted publickey for core from 139.178.89.65 port 42006 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:03.521508 sshd[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:03.542123 systemd-logind[1958]: New session 14 of user core. Sep 13 00:13:03.550004 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:13:04.618892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055549469.mount: Deactivated successfully. 
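The CreateContainer/StartContainer pairs above (goldmane, then calico-csi) are CRI calls that containerd resolves into a container object plus a running task. The same create-then-start shape via containerd's Go client, as a sketch: it assumes a reachable default socket and an image already pulled into the k8s.io namespace, and is not the kubelet's actual CRI code path:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.3")
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer ... returns container id ..."
	container, err := client.NewContainer(ctx, "goldmane-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("goldmane-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer ..." -- the task is the running instance; systemd
	// shows it as the cri-containerd-<id>.scope unit seen in the log.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	// "StartContainer ... returns successfully"
}
```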
Sep 13 00:13:06.180902 containerd[1973]: time="2025-09-13T00:13:06.180845406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:06.257676 containerd[1973]: time="2025-09-13T00:13:06.241397552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:13:06.332452 containerd[1973]: time="2025-09-13T00:13:06.330988973Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:06.426942 containerd[1973]: time="2025-09-13T00:13:06.426557199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:06.432259 containerd[1973]: time="2025-09-13T00:13:06.432116176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.708390757s" Sep 13 00:13:06.438523 containerd[1973]: time="2025-09-13T00:13:06.438455892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:13:06.439965 containerd[1973]: time="2025-09-13T00:13:06.439919926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:13:06.828927 containerd[1973]: time="2025-09-13T00:13:06.828884018Z" level=info msg="CreateContainer within sandbox \"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:13:06.934087 systemd[1]: run-containerd-runc-k8s.io-c7628c4f133a11698f9283da522a1e4700a79f8062ad79632025b592bd0c18cd-runc.SxlQmD.mount: Deactivated successfully. Sep 13 00:13:07.323887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990045260.mount: Deactivated successfully. Sep 13 00:13:07.365078 containerd[1973]: time="2025-09-13T00:13:07.364768060Z" level=info msg="CreateContainer within sandbox \"caaf25aaf1065acdb560afa08a1964034c0146e652100512e65e9e7503ae7bf4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cd06828a85b4963e8792ba2073275f6c86649bc5b47da1eb7254c892cb4d00e6\"" Sep 13 00:13:07.391115 containerd[1973]: time="2025-09-13T00:13:07.389980785Z" level=info msg="StartContainer for \"cd06828a85b4963e8792ba2073275f6c86649bc5b47da1eb7254c892cb4d00e6\"" Sep 13 00:13:07.539671 systemd[1]: Started cri-containerd-cd06828a85b4963e8792ba2073275f6c86649bc5b47da1eb7254c892cb4d00e6.scope - libcontainer container cd06828a85b4963e8792ba2073275f6c86649bc5b47da1eb7254c892cb4d00e6. 
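A quick sanity check on the whisker-backend pull above: the "stop pulling" entry reports bytes read=33085545 and the "Pulled" entry reports that the transfer took 5.708390757s (the slightly different "size" figure appears to be containerd's recorded image size rather than the bytes this pull actually fetched). The arithmetic:

```go
package main

import "fmt"

// Effective transfer rate for the whisker-backend:v3.30.3 pull logged above.
func main() {
	const bytesRead = 33085545   // from "active requests=0, bytes read=..."
	const seconds = 5.708390757  // from the "Pulled image ... in ..." entry
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~5.5 MiB/s
}
```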
Sep 13 00:13:07.784079 containerd[1973]: time="2025-09-13T00:13:07.783960071Z" level=info msg="StartContainer for \"cd06828a85b4963e8792ba2073275f6c86649bc5b47da1eb7254c892cb4d00e6\" returns successfully" Sep 13 00:13:08.670179 containerd[1973]: time="2025-09-13T00:13:08.670126404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:08.671485 containerd[1973]: time="2025-09-13T00:13:08.671411064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:13:08.672701 containerd[1973]: time="2025-09-13T00:13:08.672657068Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:08.674757 containerd[1973]: time="2025-09-13T00:13:08.674695680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:08.676099 containerd[1973]: time="2025-09-13T00:13:08.675514020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.23555616s" Sep 13 00:13:08.676099 containerd[1973]: time="2025-09-13T00:13:08.675557524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:13:08.770319 containerd[1973]: time="2025-09-13T00:13:08.770272216Z" level=info msg="CreateContainer within sandbox \"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:13:08.808452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744918465.mount: Deactivated successfully. Sep 13 00:13:08.819850 containerd[1973]: time="2025-09-13T00:13:08.819490644Z" level=info msg="CreateContainer within sandbox \"6eebdc598eff40c8d597ee4e50965295d5273e16e734bc6b0029c3d158e3868a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0c2a9690bf1c72dfde143f61462f7753647e38677f176f39e9dcd2ae95144a85\"" Sep 13 00:13:08.823625 containerd[1973]: time="2025-09-13T00:13:08.822194539Z" level=info msg="StartContainer for \"0c2a9690bf1c72dfde143f61462f7753647e38677f176f39e9dcd2ae95144a85\"" Sep 13 00:13:09.048800 systemd[1]: Started cri-containerd-0c2a9690bf1c72dfde143f61462f7753647e38677f176f39e9dcd2ae95144a85.scope - libcontainer container 0c2a9690bf1c72dfde143f61462f7753647e38677f176f39e9dcd2ae95144a85. Sep 13 00:13:09.086406 sshd[6471]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:09.150590 systemd[1]: sshd@13-172.31.31.45:22-139.178.89.65:42006.service: Deactivated successfully. Sep 13 00:13:09.156246 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:13:09.164183 systemd-logind[1958]: Session 14 logged out. Waiting for processes to exit. 
Sep 13 00:13:09.170170 containerd[1973]: time="2025-09-13T00:13:09.169939061Z" level=info msg="StartContainer for \"0c2a9690bf1c72dfde143f61462f7753647e38677f176f39e9dcd2ae95144a85\" returns successfully" Sep 13 00:13:09.175049 systemd[1]: Started sshd@14-172.31.31.45:22-139.178.89.65:42012.service - OpenSSH per-connection server daemon (139.178.89.65:42012). Sep 13 00:13:09.181640 systemd-logind[1958]: Removed session 14. Sep 13 00:13:09.551282 sshd[6609]: Accepted publickey for core from 139.178.89.65 port 42012 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:09.557193 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:09.582787 systemd-logind[1958]: New session 15 of user core. Sep 13 00:13:09.588673 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:13:09.735124 kubelet[3190]: I0913 00:13:09.679521 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r6h8z" podStartSLOduration=27.515328964 podStartE2EDuration="57.621030832s" podCreationTimestamp="2025-09-13 00:12:12 +0000 UTC" firstStartedPulling="2025-09-13 00:12:38.570781312 +0000 UTC m=+46.811513832" lastFinishedPulling="2025-09-13 00:13:08.676483168 +0000 UTC m=+76.917215700" observedRunningTime="2025-09-13 00:13:09.616256978 +0000 UTC m=+77.856989521" watchObservedRunningTime="2025-09-13 00:13:09.621030832 +0000 UTC m=+77.861763372" Sep 13 00:13:09.744363 kubelet[3190]: I0913 00:13:09.736523 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c94df5fc9-qtgd9" podStartSLOduration=5.990909996 podStartE2EDuration="34.736501664s" podCreationTimestamp="2025-09-13 00:12:35 +0000 UTC" firstStartedPulling="2025-09-13 00:12:37.694099747 +0000 UTC m=+45.934832265" lastFinishedPulling="2025-09-13 00:13:06.439691407 +0000 UTC m=+74.680423933" observedRunningTime="2025-09-13 00:13:08.472731422 +0000 UTC m=+76.713463966" watchObservedRunningTime="2025-09-13 00:13:09.736501664 +0000 UTC m=+77.977234204" Sep 13 00:13:10.156567 kubelet[3190]: I0913 00:13:10.153474 3190 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:13:10.159696 kubelet[3190]: I0913 00:13:10.159581 3190 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:13:10.966801 sshd[6609]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:10.977607 systemd[1]: sshd@14-172.31.31.45:22-139.178.89.65:42012.service: Deactivated successfully. Sep 13 00:13:10.983239 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:13:10.985611 systemd-logind[1958]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:13:11.011354 systemd[1]: Started sshd@15-172.31.31.45:22-139.178.89.65:58078.service - OpenSSH per-connection server daemon (139.178.89.65:58078). Sep 13 00:13:11.014110 systemd-logind[1958]: Removed session 15. Sep 13 00:13:11.239936 sshd[6633]: Accepted publickey for core from 139.178.89.65 port 58078 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:11.241349 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:11.246590 systemd-logind[1958]: New session 16 of user core. 
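The csi_plugin.go lines above are kubelet's side of the CSI plugin-registration handshake: the node-driver-registrar container started earlier serves a small gRPC Registration service on a socket under the kubelet plugin registry, kubelet calls GetInfo, validates the driver, and calls back with NotifyRegistrationStatus. A minimal sketch of the plugin side, assuming the standard `k8s.io/kubelet/pkg/apis/pluginregistration/v1` API and using the driver name, endpoint, and version from the log (the registry socket path is an assumption):

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrationServer is a minimal stand-in for what node-driver-registrar
// serves; kubelet's csi_plugin.go lines above are the other side.
type registrationServer struct{}

func (registrationServer) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Hypothetical registry socket path; kubelet watches this directory.
	sock := "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock"
	os.Remove(sock) // clear a stale socket from a previous run
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrationServer{})
	log.Fatal(srv.Serve(lis))
}
```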
Sep 13 00:13:11.252886 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:13:12.445568 sshd[6633]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:12.459030 systemd-logind[1958]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:13:12.459259 systemd[1]: sshd@15-172.31.31.45:22-139.178.89.65:58078.service: Deactivated successfully. Sep 13 00:13:12.464264 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:13:12.481319 systemd-logind[1958]: Removed session 16. Sep 13 00:13:12.493889 systemd[1]: Started sshd@16-172.31.31.45:22-139.178.89.65:58086.service - OpenSSH per-connection server daemon (139.178.89.65:58086). Sep 13 00:13:12.768928 sshd[6647]: Accepted publickey for core from 139.178.89.65 port 58086 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:12.771355 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:12.782903 systemd-logind[1958]: New session 17 of user core. Sep 13 00:13:12.789705 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:13:13.102256 sshd[6647]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:13.108002 systemd[1]: sshd@16-172.31.31.45:22-139.178.89.65:58086.service: Deactivated successfully. Sep 13 00:13:13.113844 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:13:13.115945 systemd-logind[1958]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:13:13.118201 systemd-logind[1958]: Removed session 17. Sep 13 00:13:18.145041 systemd[1]: Started sshd@17-172.31.31.45:22-139.178.89.65:58090.service - OpenSSH per-connection server daemon (139.178.89.65:58090). Sep 13 00:13:18.389970 sshd[6671]: Accepted publickey for core from 139.178.89.65 port 58090 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:18.391621 sshd[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:18.398110 systemd-logind[1958]: New session 18 of user core. Sep 13 00:13:18.403666 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:13:18.994504 sshd[6671]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:18.999402 systemd-logind[1958]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:13:19.000014 systemd[1]: sshd@17-172.31.31.45:22-139.178.89.65:58090.service: Deactivated successfully. Sep 13 00:13:19.003684 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:13:19.005125 systemd-logind[1958]: Removed session 18. Sep 13 00:13:24.038892 systemd[1]: Started sshd@18-172.31.31.45:22-139.178.89.65:42676.service - OpenSSH per-connection server daemon (139.178.89.65:42676). Sep 13 00:13:24.278053 sshd[6705]: Accepted publickey for core from 139.178.89.65 port 42676 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:24.279112 sshd[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:24.285910 systemd-logind[1958]: New session 19 of user core. Sep 13 00:13:24.291672 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:13:24.727329 sshd[6705]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:24.735130 systemd[1]: sshd@18-172.31.31.45:22-139.178.89.65:42676.service: Deactivated successfully. Sep 13 00:13:24.737137 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:13:24.738177 systemd-logind[1958]: Session 19 logged out. Waiting for processes to exit. 
Sep 13 00:13:24.740500 systemd-logind[1958]: Removed session 19. Sep 13 00:13:29.763756 systemd[1]: Started sshd@19-172.31.31.45:22-139.178.89.65:42692.service - OpenSSH per-connection server daemon (139.178.89.65:42692). Sep 13 00:13:29.957451 sshd[6718]: Accepted publickey for core from 139.178.89.65 port 42692 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:29.960001 sshd[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:29.967579 systemd-logind[1958]: New session 20 of user core. Sep 13 00:13:29.971480 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:13:30.828818 sshd[6718]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:30.837797 systemd[1]: sshd@19-172.31.31.45:22-139.178.89.65:42692.service: Deactivated successfully. Sep 13 00:13:30.840266 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:13:30.842010 systemd-logind[1958]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:13:30.845294 systemd-logind[1958]: Removed session 20. Sep 13 00:13:35.871855 systemd[1]: Started sshd@20-172.31.31.45:22-139.178.89.65:38934.service - OpenSSH per-connection server daemon (139.178.89.65:38934). Sep 13 00:13:36.238055 sshd[6776]: Accepted publickey for core from 139.178.89.65 port 38934 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:36.241593 sshd[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:36.248685 systemd-logind[1958]: New session 21 of user core. Sep 13 00:13:36.256667 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:13:37.851571 sshd[6776]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:37.857963 systemd-logind[1958]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:13:37.859474 systemd[1]: sshd@20-172.31.31.45:22-139.178.89.65:38934.service: Deactivated successfully. Sep 13 00:13:37.862534 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:13:37.869857 systemd-logind[1958]: Removed session 21. Sep 13 00:13:42.881783 systemd[1]: Started sshd@21-172.31.31.45:22-139.178.89.65:39200.service - OpenSSH per-connection server daemon (139.178.89.65:39200). Sep 13 00:13:43.163596 sshd[6794]: Accepted publickey for core from 139.178.89.65 port 39200 ssh2: RSA SHA256:KU1t3gEti39DZFp39xuKP7xBDpSomUw4fD6jPTPu1ho Sep 13 00:13:43.164801 sshd[6794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:13:43.175758 systemd-logind[1958]: New session 22 of user core. Sep 13 00:13:43.183650 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:13:43.996922 sshd[6794]: pam_unix(sshd:session): session closed for user core Sep 13 00:13:44.006051 systemd[1]: sshd@21-172.31.31.45:22-139.178.89.65:39200.service: Deactivated successfully. Sep 13 00:13:44.013494 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:13:44.018203 systemd-logind[1958]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:13:44.020360 systemd-logind[1958]: Removed session 22. Sep 13 00:13:57.994535 systemd[1]: cri-containerd-58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914.scope: Deactivated successfully. Sep 13 00:13:57.996896 systemd[1]: cri-containerd-58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914.scope: Consumed 4.229s CPU time, 37.6M memory peak, 0B memory swap peak. 
Sep 13 00:13:58.282957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914-rootfs.mount: Deactivated successfully. Sep 13 00:13:58.354629 containerd[1973]: time="2025-09-13T00:13:58.331161046Z" level=info msg="shim disconnected" id=58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914 namespace=k8s.io Sep 13 00:13:58.354629 containerd[1973]: time="2025-09-13T00:13:58.354623876Z" level=warning msg="cleaning up after shim disconnected" id=58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914 namespace=k8s.io Sep 13 00:13:58.357722 containerd[1973]: time="2025-09-13T00:13:58.354645501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:13:58.937133 kubelet[3190]: I0913 00:13:58.937035 3190 scope.go:117] "RemoveContainer" containerID="58197c15a5de2792bac58fc84b61e79c2e5f9d8968c1231768585555b718f914" Sep 13 00:13:59.023526 containerd[1973]: time="2025-09-13T00:13:59.023392005Z" level=info msg="CreateContainer within sandbox \"4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:13:59.127550 systemd[1]: cri-containerd-84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7.scope: Deactivated successfully. Sep 13 00:13:59.128159 systemd[1]: cri-containerd-84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7.scope: Consumed 10.467s CPU time. Sep 13 00:13:59.191892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772741037.mount: Deactivated successfully. Sep 13 00:13:59.207451 containerd[1973]: time="2025-09-13T00:13:59.205510040Z" level=info msg="shim disconnected" id=84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7 namespace=k8s.io Sep 13 00:13:59.207451 containerd[1973]: time="2025-09-13T00:13:59.205580479Z" level=warning msg="cleaning up after shim disconnected" id=84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7 namespace=k8s.io Sep 13 00:13:59.207451 containerd[1973]: time="2025-09-13T00:13:59.205595293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:13:59.212321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7-rootfs.mount: Deactivated successfully. Sep 13 00:13:59.229623 containerd[1973]: time="2025-09-13T00:13:59.229391904Z" level=info msg="CreateContainer within sandbox \"4d27745c2d16fad1873633ff68bb6d0f8daebcb49b3d3f2cab9e1fa24a1be2f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"96dc1d8c1f24058fd3497d0a7f991a740cf3cbca1b285cecebbe06504c9edf7a\"" Sep 13 00:13:59.232350 containerd[1973]: time="2025-09-13T00:13:59.231526477Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:13:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:13:59.239493 containerd[1973]: time="2025-09-13T00:13:59.239452214Z" level=info msg="StartContainer for \"96dc1d8c1f24058fd3497d0a7f991a740cf3cbca1b285cecebbe06504c9edf7a\"" Sep 13 00:13:59.296640 systemd[1]: Started cri-containerd-96dc1d8c1f24058fd3497d0a7f991a740cf3cbca1b285cecebbe06504c9edf7a.scope - libcontainer container 96dc1d8c1f24058fd3497d0a7f991a740cf3cbca1b285cecebbe06504c9edf7a. 
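The sequence above is a container crash and recovery: the cri-containerd scope for kube-controller-manager is deactivated, the shim disconnects and containerd cleans up the dead shim and rootfs mount, then kubelet removes the dead container and recreates it in the same sandbox with the attempt counter bumped (Attempt:0 becomes Attempt:1). A toy Go supervisor showing that restart-with-attempt shape — illustrative only; kubelet's real flow also consults the pod's restart policy and applies CrashLoopBackOff:

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// superviseAndRestart models the behavior visible above: when the process
// exits (the shim disconnects), clean up and start a fresh instance with
// an incremented attempt counter and growing backoff.
func superviseAndRestart(name string, args ...string) {
	backoff := 10 * time.Second
	for attempt := 0; attempt < 3; attempt++ { // capped for the demo
		log.Printf("StartContainer name=%s attempt=%d", name, attempt)
		cmd := exec.Command(name, args...)
		if err := cmd.Start(); err != nil {
			log.Printf("start failed: %v", err)
		} else if err := cmd.Wait(); err != nil {
			// Analogous to "shim disconnected" + rootfs mount cleanup.
			log.Printf("process exited: %v", err)
		}
		time.Sleep(backoff)
		backoff *= 2 // kubelet doubles up to a 5m cap
	}
}

func main() {
	superviseAndRestart("/bin/false")
}
```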
Sep 13 00:13:59.359155 containerd[1973]: time="2025-09-13T00:13:59.359029022Z" level=info msg="StartContainer for \"96dc1d8c1f24058fd3497d0a7f991a740cf3cbca1b285cecebbe06504c9edf7a\" returns successfully"
Sep 13 00:13:59.901468 kubelet[3190]: I0913 00:13:59.901398 3190 scope.go:117] "RemoveContainer" containerID="84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7"
Sep 13 00:13:59.926456 containerd[1973]: time="2025-09-13T00:13:59.925779935Z" level=info msg="CreateContainer within sandbox \"f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:13:59.947734 containerd[1973]: time="2025-09-13T00:13:59.947684038Z" level=info msg="CreateContainer within sandbox \"f5f210af41ac261b5b7dbcc90bbdcb1235011f198715f755c7651b534eacb005\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923\""
Sep 13 00:13:59.948300 containerd[1973]: time="2025-09-13T00:13:59.948269094Z" level=info msg="StartContainer for \"1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923\""
Sep 13 00:14:00.027732 systemd[1]: Started cri-containerd-1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923.scope - libcontainer container 1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923.
Sep 13 00:14:00.074928 containerd[1973]: time="2025-09-13T00:14:00.074867886Z" level=info msg="StartContainer for \"1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923\" returns successfully"
Sep 13 00:14:02.759567 systemd[1]: run-containerd-runc-k8s.io-40f01d4299442ecdae2288de89070a412bfe7c99b85be3f2fa54a6643a82636e-runc.eTyC6U.mount: Deactivated successfully.
Sep 13 00:14:04.523470 systemd[1]: cri-containerd-2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285.scope: Deactivated successfully.
Sep 13 00:14:04.523699 systemd[1]: cri-containerd-2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285.scope: Consumed 3.222s CPU time, 20.2M memory peak, 0B memory swap peak.
Sep 13 00:14:04.554092 containerd[1973]: time="2025-09-13T00:14:04.553686213Z" level=info msg="shim disconnected" id=2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285 namespace=k8s.io
Sep 13 00:14:04.554092 containerd[1973]: time="2025-09-13T00:14:04.553783231Z" level=warning msg="cleaning up after shim disconnected" id=2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285 namespace=k8s.io
Sep 13 00:14:04.554092 containerd[1973]: time="2025-09-13T00:14:04.553796899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:14:04.558483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285-rootfs.mount: Deactivated successfully.
Sep 13 00:14:04.937881 kubelet[3190]: I0913 00:14:04.937845 3190 scope.go:117] "RemoveContainer" containerID="2eb998a9d530e45a4ffbcaf063947aadf4f502a3125182419daaf809b708b285"
Sep 13 00:14:04.945477 kubelet[3190]: E0913 00:14:04.944930 3190 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 13 00:14:04.948614 containerd[1973]: time="2025-09-13T00:14:04.948568740Z" level=info msg="CreateContainer within sandbox \"aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:14:04.992170 containerd[1973]: time="2025-09-13T00:14:04.992043104Z" level=info msg="CreateContainer within sandbox \"aa571b909ced263a90098ee7191cad72797d13d7c9087f7a8b69db1447adde70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"adadb703bac6e2dc07550b9b03a77127c1fd9ddc29f2baa001fd5aeb3c52e30b\""
Sep 13 00:14:04.993455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3513707151.mount: Deactivated successfully.
Sep 13 00:14:04.996101 containerd[1973]: time="2025-09-13T00:14:04.993808291Z" level=info msg="StartContainer for \"adadb703bac6e2dc07550b9b03a77127c1fd9ddc29f2baa001fd5aeb3c52e30b\""
Sep 13 00:14:05.029685 systemd[1]: Started cri-containerd-adadb703bac6e2dc07550b9b03a77127c1fd9ddc29f2baa001fd5aeb3c52e30b.scope - libcontainer container adadb703bac6e2dc07550b9b03a77127c1fd9ddc29f2baa001fd5aeb3c52e30b.
Sep 13 00:14:05.087669 containerd[1973]: time="2025-09-13T00:14:05.087626745Z" level=info msg="StartContainer for \"adadb703bac6e2dc07550b9b03a77127c1fd9ddc29f2baa001fd5aeb3c52e30b\" returns successfully"
Sep 13 00:14:12.698829 systemd[1]: cri-containerd-1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923.scope: Deactivated successfully.
Sep 13 00:14:12.729899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923-rootfs.mount: Deactivated successfully.
Sep 13 00:14:12.741120 containerd[1973]: time="2025-09-13T00:14:12.741046662Z" level=info msg="shim disconnected" id=1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923 namespace=k8s.io
Sep 13 00:14:12.741120 containerd[1973]: time="2025-09-13T00:14:12.741145707Z" level=warning msg="cleaning up after shim disconnected" id=1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923 namespace=k8s.io
Sep 13 00:14:12.741120 containerd[1973]: time="2025-09-13T00:14:12.741164028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:14:12.995495 kubelet[3190]: I0913 00:14:12.995321 3190 scope.go:117] "RemoveContainer" containerID="84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7"
Sep 13 00:14:12.996211 kubelet[3190]: I0913 00:14:12.996140 3190 scope.go:117] "RemoveContainer" containerID="1e87438c34e90eef79e7a9e04b3e66c1f795103798ebdb1d8116055ebe22f923"
Sep 13 00:14:13.001174 kubelet[3190]: E0913 00:14:13.001024 3190 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-qwl9v_tigera-operator(30670457-eea1-4cdc-b2df-1da38b856288)\"" pod="tigera-operator/tigera-operator-755d956888-qwl9v" podUID="30670457-eea1-4cdc-b2df-1da38b856288"
Sep 13 00:14:13.088085 containerd[1973]: time="2025-09-13T00:14:13.088004680Z" level=info msg="RemoveContainer for \"84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7\""
Sep 13 00:14:13.111559 containerd[1973]: time="2025-09-13T00:14:13.111501573Z" level=info msg="RemoveContainer for \"84583e6d71ee06f6708c99974c8de22e1626c4d3d02823788fdda41da5e368f7\" returns successfully"
Sep 13 00:14:14.980000 kubelet[3190]: E0913 00:14:14.979882 3190 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-45?timeout=10s\": context deadline exceeded"