Sep 12 17:36:08.937536 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:36:08.937574 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:08.937593 kernel: BIOS-provided physical RAM map:
Sep 12 17:36:08.937605 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:36:08.937616 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:36:08.937630 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Sep 12 17:36:08.937645 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Sep 12 17:36:08.937657 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:36:08.937669 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:36:08.937685 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:36:08.937697 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:36:08.937709 kernel: NX (Execute Disable) protection: active
Sep 12 17:36:08.937721 kernel: APIC: Static calls initialized
Sep 12 17:36:08.937734 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:36:08.937750 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:36:08.937766 kernel: SMBIOS 2.7 present.
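
The three "usable" e820 ranges above are exactly what the kernel later reports as total RAM ("Memory: 1874604K/2037804K available" further down). A minimal sketch in plain Python, with the ranges copied from the map, that reproduces the 2037804K figure (the kernel re-marks the first 4 KiB page as reserved, as the later "e820: update" line shows):

    # e820 "usable" ranges from the map above, (start, end) inclusive
    usable = [
        (0x0000000000000000, 0x000000000009ffff),
        (0x0000000000100000, 0x00000000786cdfff),
        (0x00000000789de000, 0x000000007c97bfff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    total -= 0x1000  # first page re-marked reserved by the later e820 update
    print(total // 1024, "K")  # -> 2037804 K, matching the "Memory:" line
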
Sep 12 17:36:08.937780 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:36:08.937793 kernel: Hypervisor detected: KVM
Sep 12 17:36:08.937807 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:36:08.937821 kernel: kvm-clock: using sched offset of 4499164486 cycles
Sep 12 17:36:08.937835 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:36:08.937849 kernel: tsc: Detected 2499.996 MHz processor
Sep 12 17:36:08.937863 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:36:08.937891 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:36:08.937933 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:36:08.937952 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:36:08.937966 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:36:08.937980 kernel: Using GB pages for direct mapping
Sep 12 17:36:08.937994 kernel: Secure boot disabled
Sep 12 17:36:08.938007 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:36:08.938020 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:36:08.938035 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:36:08.938047 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:36:08.938059 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:36:08.938073 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:36:08.938084 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:36:08.938095 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:36:08.938107 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:36:08.938118 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:36:08.938130 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:36:08.938148 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:36:08.938163 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:36:08.938175 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:36:08.938189 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:36:08.938200 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:36:08.938212 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:36:08.938226 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:36:08.938242 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:36:08.938256 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:36:08.938269 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:36:08.938282 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:36:08.938296 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:36:08.938310 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:36:08.938325 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:36:08.938339 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:36:08.938354 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:36:08.938368 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:36:08.938387 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:36:08.938401 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Sep 12 17:36:08.938415 kernel: Zone ranges:
Sep 12 17:36:08.938430 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:36:08.938444 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:36:08.938458 kernel: Normal empty
Sep 12 17:36:08.938473 kernel: Movable zone start for each node
Sep 12 17:36:08.938487 kernel: Early memory node ranges
Sep 12 17:36:08.938502 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:36:08.938520 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:36:08.938534 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:36:08.938548 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:36:08.938563 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:36:08.938577 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:36:08.938591 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:36:08.938607 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:36:08.938621 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:36:08.938635 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:36:08.938653 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:36:08.938667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:36:08.938681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:36:08.938696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:36:08.938710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:36:08.938725 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:36:08.938740 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:36:08.938754 kernel: TSC deadline timer available
Sep 12 17:36:08.938768 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:36:08.938785 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:36:08.938800 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:36:08.938814 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:36:08.938829 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:36:08.938844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:36:08.938859 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:36:08.938873 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:36:08.938887 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:36:08.938902 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:36:08.938938 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:36:08.938957 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:08.938973 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:36:08.938987 kernel: random: crng init done
Sep 12 17:36:08.939001 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:36:08.939016 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:36:08.939030 kernel: Fallback order for Node 0: 0
Sep 12 17:36:08.939044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Sep 12 17:36:08.939061 kernel: Policy zone: DMA32
Sep 12 17:36:08.939076 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:36:08.939092 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 162940K reserved, 0K cma-reserved)
Sep 12 17:36:08.939105 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:36:08.939119 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:36:08.939135 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:36:08.939149 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:36:08.939162 kernel: Dynamic Preempt: voluntary
Sep 12 17:36:08.939174 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:36:08.939193 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:36:08.939207 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:36:08.939220 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:36:08.939233 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:36:08.939246 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:36:08.939259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:36:08.939273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:36:08.939287 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:36:08.939314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:36:08.939329 kernel: Console: colour dummy device 80x25
Sep 12 17:36:08.939344 kernel: printk: console [tty0] enabled
Sep 12 17:36:08.939359 kernel: printk: console [ttyS0] enabled
Sep 12 17:36:08.939376 kernel: ACPI: Core revision 20230628
Sep 12 17:36:08.939392 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:36:08.939408 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:36:08.939424 kernel: x2apic enabled
Sep 12 17:36:08.939439 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:36:08.939456 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 17:36:08.939475 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Sep 12 17:36:08.939491 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:36:08.939507 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:36:08.939523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:36:08.939539 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:36:08.939554 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:36:08.939570 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:36:08.939587 kernel: RETBleed: Vulnerable
Sep 12 17:36:08.939603 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:36:08.939621 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:36:08.939637 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:36:08.939652 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:36:08.939668 kernel: active return thunk: its_return_thunk
Sep 12 17:36:08.939684 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:36:08.939700 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:36:08.939716 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:36:08.939732 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:36:08.939749 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:36:08.939764 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:36:08.939780 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:36:08.939799 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:36:08.939816 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:36:08.939832 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:36:08.939848 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:36:08.939864 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:36:08.939879 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:36:08.939894 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:36:08.939921 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:36:08.939936 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:36:08.939951 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:36:08.939966 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:36:08.939981 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:36:08.940000 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:36:08.940015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:36:08.940030 kernel: landlock: Up and running.
Sep 12 17:36:08.940044 kernel: SELinux: Initializing.
Sep 12 17:36:08.940059 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:08.940074 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:08.940089 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:36:08.940104 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
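
The Spectre/RETBleed/MDS status lines above are also exposed at runtime, one file per issue, under /sys/devices/system/cpu/vulnerabilities (a standard sysfs interface on any recent kernel). A small sketch to dump them:

    from pathlib import Path

    # Each file holds one status string, e.g. "Mitigation: Retpolines"
    vulns = Path("/sys/devices/system/cpu/vulnerabilities")
    for f in sorted(vulns.iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")
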
Sep 12 17:36:08.940119 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:08.940134 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:08.940152 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:36:08.940167 kernel: signal: max sigframe size: 3632
Sep 12 17:36:08.940182 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:36:08.940198 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:36:08.940212 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:36:08.940227 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:36:08.940242 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:36:08.940257 kernel: .... node #0, CPUs: #1
Sep 12 17:36:08.940273 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:36:08.940291 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:36:08.940306 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:36:08.940321 kernel: smpboot: Max logical packages: 1
Sep 12 17:36:08.940336 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Sep 12 17:36:08.940351 kernel: devtmpfs: initialized
Sep 12 17:36:08.940366 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:36:08.940381 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:36:08.940396 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:36:08.940414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:36:08.940429 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:36:08.940444 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:36:08.940459 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:36:08.940474 kernel: audit: type=2000 audit(1757698568.239:1): state=initialized audit_enabled=0 res=1
Sep 12 17:36:08.940489 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:36:08.940504 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:36:08.940519 kernel: cpuidle: using governor menu
Sep 12 17:36:08.940534 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:36:08.940552 kernel: dca service started, version 1.12.1
Sep 12 17:36:08.940567 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:36:08.940582 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
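
The audit line above carries a raw epoch timestamp, audit(1757698568.239:1), and the rtc_cmos line further down prints the same second in both forms ("2025-09-12T17:36:08 UTC (1757698568)"). Converting one to the other with the stdlib:

    from datetime import datetime, timezone

    # epoch seconds from audit(1757698568.239:1)
    ts = datetime.fromtimestamp(1757698568, tz=timezone.utc)
    print(ts.isoformat())  # 2025-09-12T17:36:08+00:00
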
Sep 12 17:36:08.940597 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:36:08.940612 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:36:08.940627 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:36:08.940643 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:36:08.940658 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:36:08.940673 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:36:08.940691 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:36:08.940705 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:36:08.940721 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:36:08.940735 kernel: ACPI: Interpreter enabled
Sep 12 17:36:08.940751 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:36:08.940766 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:36:08.940781 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:36:08.940796 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:36:08.940811 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:36:08.940826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:36:08.941072 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:36:08.941211 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:36:08.941341 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:36:08.941365 kernel: acpiphp: Slot [3] registered
Sep 12 17:36:08.941382 kernel: acpiphp: Slot [4] registered
Sep 12 17:36:08.941409 kernel: acpiphp: Slot [5] registered
Sep 12 17:36:08.941436 kernel: acpiphp: Slot [6] registered
Sep 12 17:36:08.941461 kernel: acpiphp: Slot [7] registered
Sep 12 17:36:08.941477 kernel: acpiphp: Slot [8] registered
Sep 12 17:36:08.941493 kernel: acpiphp: Slot [9] registered
Sep 12 17:36:08.941509 kernel: acpiphp: Slot [10] registered
Sep 12 17:36:08.941526 kernel: acpiphp: Slot [11] registered
Sep 12 17:36:08.941542 kernel: acpiphp: Slot [12] registered
Sep 12 17:36:08.941558 kernel: acpiphp: Slot [13] registered
Sep 12 17:36:08.941574 kernel: acpiphp: Slot [14] registered
Sep 12 17:36:08.941590 kernel: acpiphp: Slot [15] registered
Sep 12 17:36:08.941609 kernel: acpiphp: Slot [16] registered
Sep 12 17:36:08.941625 kernel: acpiphp: Slot [17] registered
Sep 12 17:36:08.941641 kernel: acpiphp: Slot [18] registered
Sep 12 17:36:08.941658 kernel: acpiphp: Slot [19] registered
Sep 12 17:36:08.941673 kernel: acpiphp: Slot [20] registered
Sep 12 17:36:08.941689 kernel: acpiphp: Slot [21] registered
Sep 12 17:36:08.941705 kernel: acpiphp: Slot [22] registered
Sep 12 17:36:08.941721 kernel: acpiphp: Slot [23] registered
Sep 12 17:36:08.941737 kernel: acpiphp: Slot [24] registered
Sep 12 17:36:08.941753 kernel: acpiphp: Slot [25] registered
Sep 12 17:36:08.941772 kernel: acpiphp: Slot [26] registered
Sep 12 17:36:08.941788 kernel: acpiphp: Slot [27] registered
Sep 12 17:36:08.941804 kernel: acpiphp: Slot [28] registered
Sep 12 17:36:08.941820 kernel: acpiphp: Slot [29] registered
Sep 12 17:36:08.941836 kernel: acpiphp: Slot [30] registered
Sep 12 17:36:08.941852 kernel: acpiphp: Slot [31] registered
Sep 12 17:36:08.941868 kernel: PCI host bridge to bus 0000:00
Sep 12 17:36:08.942880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:36:08.943039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:36:08.943696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:08.943856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:08.944004 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:36:08.944126 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:36:08.944290 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:36:08.944436 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:36:08.944583 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Sep 12 17:36:08.944715 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:36:08.944846 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:36:08.944993 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:36:08.945127 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:36:08.945256 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:36:08.945386 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:36:08.945522 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:36:08.945659 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Sep 12 17:36:08.948582 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Sep 12 17:36:08.948829 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:36:08.949231 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Sep 12 17:36:08.949385 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:36:08.949543 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:36:08.949685 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Sep 12 17:36:08.949839 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:36:08.950013 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Sep 12 17:36:08.950032 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:36:08.950046 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:36:08.950061 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:36:08.950077 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:36:08.950097 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:36:08.950113 kernel: iommu: Default domain type: Translated
Sep 12 17:36:08.950129 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:36:08.950145 kernel: efivars: Registered efivars operations
Sep 12 17:36:08.950161 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:36:08.950177 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:36:08.950192 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:36:08.950208 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:36:08.950340 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:36:08.950477 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:36:08.950607 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:36:08.950626 kernel: vgaarb: loaded
Sep 12 17:36:08.950642 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:36:08.950659 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:36:08.950674 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:36:08.950690 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:36:08.950707 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:36:08.950726 kernel: pnp: PnP ACPI init
Sep 12 17:36:08.950742 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:36:08.950758 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:36:08.950773 kernel: NET: Registered PF_INET protocol family
Sep 12 17:36:08.950789 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:36:08.950805 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:36:08.950820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:36:08.950835 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:36:08.950852 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:36:08.950870 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:36:08.950884 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:08.950898 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:08.952965 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:36:08.952986 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:36:08.953143 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:36:08.953266 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:36:08.953576 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:08.953725 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:08.953887 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:36:08.954065 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:36:08.954086 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:36:08.954102 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:36:08.954118 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Sep 12 17:36:08.954133 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:36:08.954147 kernel: Initialise system trusted keyrings
Sep 12 17:36:08.954162 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:36:08.954184 kernel: Key type asymmetric registered
Sep 12 17:36:08.954198 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:36:08.954213 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:36:08.954228 kernel: io scheduler mq-deadline registered
Sep 12 17:36:08.954243 kernel: io scheduler kyber registered
Sep 12 17:36:08.954256 kernel: io scheduler bfq registered
Sep 12 17:36:08.954272 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:36:08.954287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:36:08.954303 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:36:08.954324 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:36:08.954340 kernel: i8042: Warning: Keylock active
Sep 12 17:36:08.954356 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:36:08.954372 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:36:08.954543 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:36:08.954674 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:36:08.954802 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:36:08 UTC (1757698568)
Sep 12 17:36:08.954954 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:36:08.954980 kernel: intel_pstate: CPU model not supported
Sep 12 17:36:08.954995 kernel: efifb: probing for efifb
Sep 12 17:36:08.955012 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Sep 12 17:36:08.955027 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:36:08.955042 kernel: efifb: scrolling: redraw
Sep 12 17:36:08.955059 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:36:08.955076 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:36:08.955093 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:36:08.955110 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:36:08.955130 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:36:08.955146 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:36:08.955161 kernel: Segment Routing with IPv6
Sep 12 17:36:08.955177 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:36:08.955193 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:36:08.955209 kernel: Key type dns_resolver registered
Sep 12 17:36:08.955253 kernel: IPI shorthand broadcast: enabled
Sep 12 17:36:08.955272 kernel: sched_clock: Marking stable (485001997, 138681189)->(714679444, -90996258)
Sep 12 17:36:08.955290 kernel: registered taskstats version 1
Sep 12 17:36:08.955312 kernel: Loading compiled-in X.509 certificates
Sep 12 17:36:08.955328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:36:08.955345 kernel: Key type .fscrypt registered
Sep 12 17:36:08.955362 kernel: Key type fscrypt-provisioning registered
Sep 12 17:36:08.955380 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:36:08.955396 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:36:08.955412 kernel: ima: No architecture policies found
Sep 12 17:36:08.955429 kernel: clk: Disabling unused clocks
Sep 12 17:36:08.955450 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:36:08.955468 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:36:08.955487 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:36:08.955505 kernel: Run /init as init process
Sep 12 17:36:08.955523 kernel: with arguments:
Sep 12 17:36:08.955540 kernel: /init
Sep 12 17:36:08.955556 kernel: with environment:
Sep 12 17:36:08.955572 kernel: HOME=/
Sep 12 17:36:08.955590 kernel: TERM=linux
Sep 12 17:36:08.955608 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:36:08.955636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:36:08.955656 systemd[1]: Detected virtualization amazon.
Sep 12 17:36:08.955674 systemd[1]: Detected architecture x86-64.
Sep 12 17:36:08.955692 systemd[1]: Running in initrd.
Sep 12 17:36:08.955710 systemd[1]: No hostname configured, using default hostname.
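
The handoff above ends with the kernel exporting BOOT_IMAGE=/flatcar/vmlinuz-a into /init's environment, because it could not interpret it itself ("Unknown kernel command line parameters ... will be passed to user space" earlier in the log). Userspace tools usually re-read the full command line from /proc/cmdline; a minimal parser sketch:

    # Parse /proc/cmdline into a dict; bare flags map to None.
    # Repeated keys (e.g. the two console= entries above) keep the last value.
    def parse_cmdline(path="/proc/cmdline"):
        params = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                params[key] = value if sep else None
        return params

    print(parse_cmdline().get("root"))  # "LABEL=ROOT" on this system
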
Sep 12 17:36:08.955728 systemd[1]: Hostname set to .
Sep 12 17:36:08.955751 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:36:08.955771 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:36:08.955789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:36:08.955807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:36:08.955827 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:36:08.955847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:36:08.955867 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:36:08.955891 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:36:08.959959 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:36:08.959987 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:36:08.960007 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:36:08.960026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:36:08.960051 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:36:08.960067 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:36:08.960085 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:36:08.960104 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:36:08.960123 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:36:08.960142 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:36:08.960161 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:36:08.960179 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:36:08.960198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:36:08.960220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:36:08.960239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:36:08.960257 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:36:08.960276 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:36:08.960295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:36:08.960314 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:36:08.960332 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:36:08.960352 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:36:08.960374 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:36:08.960392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:08.960411 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:36:08.960474 systemd-journald[178]: Collecting audit messages is disabled.
Sep 12 17:36:08.960526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:36:08.960548 systemd[1]: Finished systemd-fsck-usr.service.
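
The dev-disk-by\x2dlabel-... device unit names above come from systemd's path escaping: '/' separators become '-', and characters that would otherwise be ambiguous (such as a literal '-') become \xXX hex escapes. A rough illustrative approximation of the scheme (the canonical implementation is the systemd-escape tool; this sketch skips some corner cases like leading dots):

    import string

    OK_CHARS = set(string.ascii_letters + string.digits + ":_.")

    def systemd_escape_path(path):
        # Rough equivalent of `systemd-escape --path`: strip slashes at the
        # ends, escape odd characters as \xXX, then turn '/' into '-'.
        parts = path.strip("/").split("/")
        out = []
        for part in parts:
            esc = "".join(c if c in OK_CHARS else "\\x%02x" % ord(c) for c in part)
            out.append(esc)
        return "-".join(out)

    # /dev/disk/by-label/EFI-SYSTEM -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM
    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
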
Sep 12 17:36:08.960573 systemd-journald[178]: Journal started
Sep 12 17:36:08.960623 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2fb3157340c322a532ab83683f6eed) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:36:08.967963 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:36:08.949174 systemd-modules-load[179]: Inserted module 'overlay'
Sep 12 17:36:08.981979 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:36:08.982958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:08.988100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:09.000013 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:36:09.000309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:36:09.004687 kernel: Bridge firewalling registered
Sep 12 17:36:09.001924 systemd-modules-load[179]: Inserted module 'br_netfilter'
Sep 12 17:36:09.008930 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:36:09.010643 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:36:09.011794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:36:09.015065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:36:09.022422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:36:09.025083 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:36:09.031696 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:09.037170 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:36:09.039688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:36:09.050152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:36:09.053170 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:36:09.067681 dracut-cmdline[212]: dracut-dracut-053
Sep 12 17:36:09.072356 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:09.102080 systemd-resolved[214]: Positive Trust Anchors:
Sep 12 17:36:09.102096 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:36:09.102160 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:36:09.110462 systemd-resolved[214]: Defaulting to hostname 'linux'.
Sep 12 17:36:09.113714 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:36:09.114495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:36:09.169655 kernel: SCSI subsystem initialized
Sep 12 17:36:09.191279 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:36:09.204934 kernel: iscsi: registered transport (tcp)
Sep 12 17:36:09.228416 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:36:09.228494 kernel: QLogic iSCSI HBA Driver
Sep 12 17:36:09.268836 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:36:09.274211 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:36:09.308117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:36:09.308188 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:36:09.308203 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:36:09.355976 kernel: raid6: avx512x4 gen() 17591 MB/s
Sep 12 17:36:09.373967 kernel: raid6: avx512x2 gen() 17655 MB/s
Sep 12 17:36:09.391967 kernel: raid6: avx512x1 gen() 17619 MB/s
Sep 12 17:36:09.410006 kernel: raid6: avx2x4 gen() 16851 MB/s
Sep 12 17:36:09.427970 kernel: raid6: avx2x2 gen() 17608 MB/s
Sep 12 17:36:09.446192 kernel: raid6: avx2x1 gen() 13507 MB/s
Sep 12 17:36:09.446256 kernel: raid6: using algorithm avx512x2 gen() 17655 MB/s
Sep 12 17:36:09.465143 kernel: raid6: .... xor() 24534 MB/s, rmw enabled
Sep 12 17:36:09.465219 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:36:09.486958 kernel: xor: automatically using best checksumming function avx
Sep 12 17:36:09.657957 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:36:09.669246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:36:09.677133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:36:09.690734 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Sep 12 17:36:09.695870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:36:09.706061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:36:09.725587 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Sep 12 17:36:09.757122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:36:09.763115 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:36:09.814547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:36:09.827136 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
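
The raid6 lines above are the kernel benchmarking each available SIMD generator and keeping the fastest one (avx512x2 at 17655 MB/s). The selection step reduces to a max over the measured rates:

    # (algorithm, MB/s) pairs as benchmarked above
    results = {"avx512x4": 17591, "avx512x2": 17655, "avx512x1": 17619,
               "avx2x4": 16851, "avx2x2": 17608, "avx2x1": 13507}
    best = max(results, key=results.get)
    print(best, results[best])  # avx512x2 17655, as the log concludes
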
Sep 12 17:36:09.846975 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:36:09.853957 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:36:09.855012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:36:09.856218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:36:09.864170 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:36:09.891583 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:36:09.918165 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:36:09.918440 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:36:09.922957 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:36:09.931937 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 12 17:36:09.949535 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:36:09.949636 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:36:09.950931 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:36:09.951112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:09.959304 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:41:c7:e7:d9:a1
Sep 12 17:36:09.958674 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:09.960726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:09.960983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:09.961601 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:09.961688 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:36:09.973324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:09.985722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:09.985896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:09.994090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:10.005165 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:36:10.005487 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:36:10.017994 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:36:10.025422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:10.032493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:36:10.032565 kernel: GPT:9289727 != 16777215
Sep 12 17:36:10.032585 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:36:10.034151 kernel: GPT:9289727 != 16777215
Sep 12 17:36:10.034201 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:36:10.034220 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:36:10.038212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:10.058538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
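
The GPT complaints above (9289727 != 16777215) are the usual cloud-image pattern: the image carries a backup GPT header placed for a smaller disk (LBA 9289727), but it was written to a larger EBS volume whose last LBA is 16777215; disk-uuid.service rewrites the headers shortly afterwards. A sketch that reads those same two numbers straight off the device, using the standard GPT layout (primary header at LBA 1, backup-LBA field at byte offset 32 of the header); the device path is this system's, adjust as needed:

    import os, struct

    def gpt_backup_mismatch(dev="/dev/nvme0n1", sector=512):
        fd = os.open(dev, os.O_RDONLY)
        try:
            last_lba = os.lseek(fd, 0, os.SEEK_END) // sector - 1
            os.lseek(fd, sector, os.SEEK_SET)   # primary GPT header at LBA 1
            hdr = os.read(fd, 92)
        finally:
            os.close(fd)
        assert hdr[:8] == b"EFI PART", "no GPT signature"
        backup_lba = struct.unpack_from("<Q", hdr, 32)[0]
        return backup_lba, last_lba  # log shows 9289727 vs 16777215
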
Sep 12 17:36:10.137938 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (451)
Sep 12 17:36:10.156893 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (446)
Sep 12 17:36:10.194645 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:36:10.228863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:36:10.235356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:36:10.236020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:36:10.243513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:36:10.249237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:36:10.257272 disk-uuid[632]: Primary Header is updated.
Sep 12 17:36:10.257272 disk-uuid[632]: Secondary Entries is updated.
Sep 12 17:36:10.257272 disk-uuid[632]: Secondary Header is updated.
Sep 12 17:36:10.266204 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:36:10.271934 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:36:11.282595 disk-uuid[633]: The operation has completed successfully.
Sep 12 17:36:11.283299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:36:11.442652 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:36:11.442787 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:36:11.460159 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:36:11.465641 sh[978]: Success
Sep 12 17:36:11.486932 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:36:11.594767 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:36:11.602104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:36:11.604968 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:36:11.634203 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:36:11.634268 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:11.636022 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:36:11.638774 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:36:11.638852 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:36:11.763966 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:36:11.788395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:36:11.789730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:36:11.796111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:36:11.800129 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:36:11.826945 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:11.827021 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:11.827050 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:36:11.843952 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:36:11.855945 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:36:11.858646 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:11.866184 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:36:11.874278 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:36:11.908338 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:36:11.914203 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:36:11.941556 systemd-networkd[1170]: lo: Link UP
Sep 12 17:36:11.941570 systemd-networkd[1170]: lo: Gained carrier
Sep 12 17:36:11.943426 systemd-networkd[1170]: Enumeration completed
Sep 12 17:36:11.943547 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:36:11.944397 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:11.944403 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:36:11.946791 systemd[1]: Reached target network.target - Network.
Sep 12 17:36:11.948080 systemd-networkd[1170]: eth0: Link UP
Sep 12 17:36:11.948086 systemd-networkd[1170]: eth0: Gained carrier
Sep 12 17:36:11.948099 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:11.957006 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.19.87/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:36:12.355234 ignition[1121]: Ignition 2.19.0
Sep 12 17:36:12.355246 ignition[1121]: Stage: fetch-offline
Sep 12 17:36:12.355454 ignition[1121]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:12.355462 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:12.357020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:36:12.355794 ignition[1121]: Ignition finished successfully
Sep 12 17:36:12.363171 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
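
The DHCPv4 lease above (172.31.19.87/20, gateway 172.31.16.1, acquired from 172.31.16.1) is internally consistent: both the gateway and the DHCP server sit inside the same /20 as the leased address. A quick check with the stdlib:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.87/20")
    print(iface.network)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
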
Sep 12 17:36:12.386940 ignition[1179]: Ignition 2.19.0
Sep 12 17:36:12.386955 ignition[1179]: Stage: fetch
Sep 12 17:36:12.387423 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:12.387436 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:12.387556 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:12.395847 ignition[1179]: PUT result: OK
Sep 12 17:36:12.397482 ignition[1179]: parsed url from cmdline: ""
Sep 12 17:36:12.397494 ignition[1179]: no config URL provided
Sep 12 17:36:12.397502 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:36:12.397514 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:36:12.397532 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:12.398078 ignition[1179]: PUT result: OK
Sep 12 17:36:12.398128 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:36:12.400553 ignition[1179]: GET result: OK
Sep 12 17:36:12.400730 ignition[1179]: parsing config with SHA512: bda75d0ff2f34a09db7c3a163dfefdf1225c7d4b20f174e5976efccc0c8870de6233cb9fb44ab65cba0409bf3ee571b49b151dcad0055ab54d6276a8353f83a6
Sep 12 17:36:12.406263 unknown[1179]: fetched base config from "system"
Sep 12 17:36:12.406621 ignition[1179]: fetch: fetch complete
Sep 12 17:36:12.406272 unknown[1179]: fetched base config from "system"
Sep 12 17:36:12.406626 ignition[1179]: fetch: fetch passed
Sep 12 17:36:12.406278 unknown[1179]: fetched user config from "aws"
Sep 12 17:36:12.406664 ignition[1179]: Ignition finished successfully
Sep 12 17:36:12.410624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:36:12.416153 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:36:12.432839 ignition[1185]: Ignition 2.19.0
Sep 12 17:36:12.432853 ignition[1185]: Stage: kargs
Sep 12 17:36:12.433365 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:12.433389 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:12.433524 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:12.435641 ignition[1185]: PUT result: OK
Sep 12 17:36:12.438787 ignition[1185]: kargs: kargs passed
Sep 12 17:36:12.438859 ignition[1185]: Ignition finished successfully
Sep 12 17:36:12.440850 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:36:12.447131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:36:12.462347 ignition[1191]: Ignition 2.19.0
Sep 12 17:36:12.462361 ignition[1191]: Stage: disks
Sep 12 17:36:12.462831 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:12.462847 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:12.462999 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:12.464727 ignition[1191]: PUT result: OK
Sep 12 17:36:12.467533 ignition[1191]: disks: disks passed
Sep 12 17:36:12.467615 ignition[1191]: Ignition finished successfully
Sep 12 17:36:12.469450 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:36:12.470586 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:36:12.471012 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:36:12.471551 systemd[1]: Reached target local-fs.target - Local File Systems.
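
Each Ignition stage above (fetch, kargs, disks) re-runs the same IMDSv2 handshake: PUT to http://169.254.169.254/latest/api/token for a session token, GET the user-data with the token attached, then log the SHA512 of what was fetched before parsing it. A stdlib-only sketch of that flow (the header names are the documented IMDSv2 ones; this only works when run on the instance itself):

    import hashlib
    import urllib.request

    BASE = "http://169.254.169.254"

    # PUT /latest/api/token with a TTL header -> session token (IMDSv2)
    req = urllib.request.Request(
        BASE + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(req).read().decode()

    # GET the user-data the way the fetch stage does, token attached
    req = urllib.request.Request(
        BASE + "/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(req).read()

    # Ignition logs "parsing config with SHA512: ..." over these bytes
    print(hashlib.sha512(user_data).hexdigest())
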
Sep 12 17:36:12.472133 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:36:12.472758 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:36:12.478252 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:36:12.521245 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:36:12.524189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:36:12.529030 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:36:12.634935 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:36:12.635465 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:36:12.636677 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:36:12.655138 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:36:12.658184 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:36:12.660051 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:36:12.660122 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:36:12.660156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:36:12.672773 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:36:12.678426 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1218)
Sep 12 17:36:12.679425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:36:12.684703 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:12.684732 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:12.684745 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:36:12.700941 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:36:12.702671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:36:13.009556 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:36:13.025592 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:36:13.031170 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:36:13.030866 systemd-networkd[1170]: eth0: Gained IPv6LL
Sep 12 17:36:13.035245 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:36:13.297419 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:36:13.303065 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:36:13.306189 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:36:13.316226 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:36:13.318029 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:13.347121 ignition[1330]: INFO : Ignition 2.19.0
Sep 12 17:36:13.349669 ignition[1330]: INFO : Stage: mount
Sep 12 17:36:13.349669 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:13.349669 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:13.349669 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:13.354612 ignition[1330]: INFO : PUT result: OK
Sep 12 17:36:13.357056 ignition[1330]: INFO : mount: mount passed
Sep 12 17:36:13.357056 ignition[1330]: INFO : Ignition finished successfully
Sep 12 17:36:13.358604 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:36:13.359456 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:36:13.365051 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:36:13.642303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:36:13.663930 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1344)
Sep 12 17:36:13.668120 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:13.668199 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:13.668223 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:36:13.674935 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:36:13.677335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:36:13.707968 ignition[1360]: INFO : Ignition 2.19.0
Sep 12 17:36:13.707968 ignition[1360]: INFO : Stage: files
Sep 12 17:36:13.709499 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:13.709499 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:13.709499 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:13.711085 ignition[1360]: INFO : PUT result: OK
Sep 12 17:36:13.712461 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:36:13.713343 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:36:13.714192 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:36:13.729047 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:36:13.730246 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:36:13.730246 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:36:13.729593 unknown[1360]: wrote ssh authorized keys file for user: core
Sep 12 17:36:13.733226 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:36:13.733226 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 17:36:13.806201 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:36:14.223468 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:36:14.224827 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:36:14.233245 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:36:14.233245 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:36:14.233245 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 17:36:14.722695 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:36:15.202725 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:36:15.202725 ignition[1360]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:36:15.215947 ignition[1360]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:36:15.216903 ignition[1360]: INFO : files: files passed
Sep 12 17:36:15.216903 ignition[1360]: INFO : Ignition finished successfully
Sep 12 17:36:15.218088 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:36:15.228171 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:36:15.231273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:36:15.235092 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:36:15.236045 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:36:15.295547 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:36:15.295547 initrd-setup-root-after-ignition[1390]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:36:15.298723 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:36:15.317659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:36:15.318649 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:36:15.323128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:36:15.351464 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:36:15.351606 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:36:15.352882 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:36:15.354194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:36:15.355053 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:36:15.360186 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:36:15.375019 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:36:15.380133 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:36:15.393762 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:36:15.394554 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:36:15.395576 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:36:15.396469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:36:15.396655 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:36:15.398036 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:36:15.398881 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:36:15.399705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:36:15.400508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:36:15.401319 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:36:15.402251 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:36:15.403038 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:36:15.403823 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:36:15.404974 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:36:15.405715 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:36:15.406556 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:36:15.406743 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:36:15.407817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:36:15.408618 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:36:15.409295 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:36:15.409437 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:36:15.410200 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:36:15.410416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:36:15.411705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:36:15.411886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:36:15.412599 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:36:15.412752 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:36:15.420194 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:36:15.423187 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:36:15.423864 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:36:15.424079 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:36:15.426510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:36:15.426685 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:36:15.441339 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:36:15.441487 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:36:15.446635 ignition[1414]: INFO : Ignition 2.19.0
Sep 12 17:36:15.446635 ignition[1414]: INFO : Stage: umount
Sep 12 17:36:15.448545 ignition[1414]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:15.448545 ignition[1414]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:36:15.448545 ignition[1414]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:36:15.450557 ignition[1414]: INFO : PUT result: OK
Sep 12 17:36:15.453329 ignition[1414]: INFO : umount: umount passed
Sep 12 17:36:15.454114 ignition[1414]: INFO : Ignition finished successfully
Sep 12 17:36:15.457844 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:36:15.458110 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:36:15.460561 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:36:15.460632 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:36:15.461221 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:36:15.461284 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:36:15.462428 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:36:15.462486 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:36:15.465033 systemd[1]: Stopped target network.target - Network.
Sep 12 17:36:15.465817 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:36:15.465934 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:36:15.466411 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:36:15.466838 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:36:15.471081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:36:15.471518 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:36:15.471851 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:36:15.472262 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:36:15.472323 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:36:15.473223 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:36:15.473281 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:36:15.474056 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:36:15.474138 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:36:15.474619 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:36:15.474760 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:36:15.475510 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:36:15.476175 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:36:15.478446 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:36:15.481736 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:36:15.481987 systemd-networkd[1170]: eth0: DHCPv6 lease lost
Sep 12 17:36:15.483075 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:36:15.485424 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:36:15.486202 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:36:15.487686 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:36:15.487751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:36:15.494140 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:36:15.494594 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:36:15.494681 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:36:15.495673 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:36:15.495742 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:36:15.498662 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:36:15.498729 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:36:15.499175 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:36:15.499237 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:36:15.500059 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:36:15.515845 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:36:15.516031 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:36:15.516795 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:36:15.516887 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:36:15.518160 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:36:15.518231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:36:15.519359 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:36:15.519417 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:36:15.520317 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:36:15.520390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:36:15.521561 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:36:15.521628 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:36:15.522968 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:36:15.523035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:15.529468 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:36:15.530131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:36:15.530224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:36:15.530840 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:36:15.530902 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:36:15.531963 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:36:15.532033 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:36:15.532886 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:15.533020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:15.551124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:36:15.551266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:36:15.625808 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:36:15.626338 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:36:15.627699 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:36:15.628255 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:36:15.628339 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:36:15.642316 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:36:15.651355 systemd[1]: Switching root.
Sep 12 17:36:15.695370 systemd-journald[178]: Journal stopped
Sep 12 17:36:17.462470 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:36:17.462563 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:36:17.462593 kernel: SELinux: policy capability open_perms=1
Sep 12 17:36:17.462611 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:36:17.462639 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:36:17.462657 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:36:17.462675 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:36:17.462694 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:36:17.462711 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:36:17.462730 kernel: audit: type=1403 audit(1757698576.136:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:36:17.462752 systemd[1]: Successfully loaded SELinux policy in 65.820ms.
Sep 12 17:36:17.462777 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.376ms.
Sep 12 17:36:17.462797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:36:17.462817 systemd[1]: Detected virtualization amazon.
Sep 12 17:36:17.462834 systemd[1]: Detected architecture x86-64.
Sep 12 17:36:17.462852 systemd[1]: Detected first boot.
Sep 12 17:36:17.462871 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:36:17.462898 zram_generator::config[1456]: No configuration found.
Sep 12 17:36:17.462940 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:36:17.462958 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:36:17.462978 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:36:17.462998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:36:17.463019 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:36:17.463039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:36:17.463057 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:36:17.463077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:36:17.463097 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:36:17.463120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:36:17.463141 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:36:17.463162 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:36:17.463183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:36:17.463204 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:36:17.463227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:36:17.463253 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:36:17.463277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:36:17.463299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:36:17.463322 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:36:17.463342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:36:17.463361 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:36:17.463380 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:36:17.463400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:36:17.463420 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:36:17.463441 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:36:17.463467 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:36:17.463489 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:36:17.463509 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:36:17.463530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:36:17.463550 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:36:17.463569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:36:17.463591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:36:17.463612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:36:17.463634 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:36:17.463655 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:36:17.463680 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:36:17.463702 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:36:17.463723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:17.463745 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:36:17.463766 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:36:17.463788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:36:17.463810 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:36:17.463831 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:36:17.463858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:36:17.463881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:36:17.463902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:36:17.463979 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:36:17.463997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:36:17.464014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:36:17.464034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:36:17.464054 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:36:17.464076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:36:17.464103 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:36:17.464123 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:36:17.464145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:36:17.464166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:36:17.464188 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:36:17.464208 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:36:17.464228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:36:17.464249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:36:17.464275 kernel: fuse: init (API version 7.39)
Sep 12 17:36:17.464297 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:36:17.464317 kernel: loop: module loaded
Sep 12 17:36:17.464337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:36:17.464357 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:36:17.464378 systemd[1]: Stopped verity-setup.service.
Sep 12 17:36:17.464400 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:17.464422 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:36:17.464443 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:36:17.464468 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:36:17.464488 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:36:17.464510 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:36:17.464531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:36:17.464552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:36:17.464576 kernel: ACPI: bus type drm_connector registered
Sep 12 17:36:17.464594 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:36:17.464614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:36:17.464631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:36:17.464650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:36:17.464668 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:36:17.464695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:36:17.464715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:36:17.464745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:36:17.464771 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:36:17.464796 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:36:17.464817 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:36:17.464836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:36:17.464857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:36:17.464884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:36:17.464958 systemd-journald[1541]: Collecting audit messages is disabled.
Sep 12 17:36:17.464999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:36:17.465019 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:36:17.465041 systemd-journald[1541]: Journal started
Sep 12 17:36:17.465078 systemd-journald[1541]: Runtime Journal (/run/log/journal/ec2fb3157340c322a532ab83683f6eed) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:36:17.065981 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:36:17.130758 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:36:17.131202 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:36:17.478932 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:36:17.492932 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:36:17.501504 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:36:17.501602 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:36:17.506936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:36:17.517959 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:36:17.532002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:36:17.532105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:36:17.540929 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:36:17.546960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:36:17.554932 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:36:17.561938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:36:17.570954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:36:17.577936 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:36:17.588958 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:36:17.598374 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:36:17.601992 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:36:17.603220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:36:17.604130 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:36:17.605257 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:36:17.607704 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:36:17.626798 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:36:17.655043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:36:17.663029 kernel: loop0: detected capacity change from 0 to 142488
Sep 12 17:36:17.667314 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:36:17.678252 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:36:17.683578 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:36:17.684758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:36:17.702053 systemd-journald[1541]: Time spent on flushing to /var/log/journal/ec2fb3157340c322a532ab83683f6eed is 95.412ms for 992 entries.
Sep 12 17:36:17.702053 systemd-journald[1541]: System Journal (/var/log/journal/ec2fb3157340c322a532ab83683f6eed) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:36:17.810487 systemd-journald[1541]: Received client request to flush runtime journal.
Sep 12 17:36:17.810558 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:36:17.706420 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Sep 12 17:36:17.706443 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Sep 12 17:36:17.725950 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:36:17.736234 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:36:17.764395 udevadm[1596]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:36:17.813586 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:36:17.826959 kernel: loop1: detected capacity change from 0 to 224512
Sep 12 17:36:17.830809 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:36:17.832866 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:36:17.845317 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:36:17.855167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:36:17.878253 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Sep 12 17:36:17.878712 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Sep 12 17:36:17.886060 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:36:17.951956 kernel: loop2: detected capacity change from 0 to 61336
Sep 12 17:36:18.009982 kernel: loop3: detected capacity change from 0 to 140768
Sep 12 17:36:18.138047 kernel: loop4: detected capacity change from 0 to 142488
Sep 12 17:36:18.163935 kernel: loop5: detected capacity change from 0 to 224512
Sep 12 17:36:18.207958 kernel: loop6: detected capacity change from 0 to 61336
Sep 12 17:36:18.232937 kernel: loop7: detected capacity change from 0 to 140768
Sep 12 17:36:18.257093 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:36:18.257601 (sd-merge)[1614]: Merged extensions into '/usr'.
Sep 12 17:36:18.265665 systemd[1]: Reloading requested from client PID 1567 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:36:18.265681 systemd[1]: Reloading...
Sep 12 17:36:18.339962 zram_generator::config[1639]: No configuration found.
Sep 12 17:36:18.538090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:36:18.631225 systemd[1]: Reloading finished in 364 ms.
Sep 12 17:36:18.663142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:36:18.671176 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:36:18.683099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:36:18.685678 systemd[1]: Reloading requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:36:18.685691 systemd[1]: Reloading...
Sep 12 17:36:18.712252 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:36:18.712817 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:36:18.715510 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:36:18.716159 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Sep 12 17:36:18.716402 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Sep 12 17:36:18.739613 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:36:18.739635 systemd-tmpfiles[1692]: Skipping /boot
Sep 12 17:36:18.759877 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:36:18.759897 systemd-tmpfiles[1692]: Skipping /boot
Sep 12 17:36:18.793955 zram_generator::config[1715]: No configuration found.
Sep 12 17:36:18.946218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:36:19.007649 systemd[1]: Reloading finished in 321 ms.
Sep 12 17:36:19.024606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:36:19.029839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:36:19.036869 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:36:19.043358 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:36:19.047716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:36:19.059538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:36:19.065138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:36:19.073122 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:36:19.087061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.087368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:36:19.094178 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:36:19.097232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:36:19.104224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:36:19.105034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:36:19.109114 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:36:19.109707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.118087 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.118461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:36:19.118796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:36:19.119414 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.132043 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.132453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:36:19.142272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:36:19.143121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:36:19.143387 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:36:19.144171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:36:19.146609 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:36:19.159847 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:36:19.183834 systemd-udevd[1779]: Using default interface naming scheme 'v255'.
Sep 12 17:36:19.185052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:36:19.185277 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:36:19.190964 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:36:19.192079 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:36:19.193798 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:36:19.194036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:36:19.199010 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:36:19.215339 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:36:19.217834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:36:19.219778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:36:19.221264 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:36:19.231040 augenrules[1807]: No rules
Sep 12 17:36:19.233645 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:36:19.246801 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:36:19.280935 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:36:19.292129 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:36:19.349008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:36:19.350430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:36:19.376242 systemd-resolved[1777]: Positive Trust Anchors:
Sep 12 17:36:19.376267 systemd-resolved[1777]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:36:19.376318 systemd-resolved[1777]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:36:19.386445 systemd-resolved[1777]: Defaulting to hostname 'linux'.
Sep 12 17:36:19.389964 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:36:19.391078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:36:19.438281 ldconfig[1563]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:36:19.442403 systemd-networkd[1820]: lo: Link UP
Sep 12 17:36:19.442413 systemd-networkd[1820]: lo: Gained carrier
Sep 12 17:36:19.443252 systemd-networkd[1820]: Enumeration completed
Sep 12 17:36:19.443371 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:36:19.444745 systemd[1]: Reached target network.target - Network.
Sep 12 17:36:19.455111 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:36:19.456177 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:36:19.458408 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:36:19.470796 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:36:19.478391 (udev-worker)[1829]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:36:19.508871 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:36:19.550390 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:19.550403 systemd-networkd[1820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:36:19.554034 systemd-networkd[1820]: eth0: Link UP
Sep 12 17:36:19.554464 systemd-networkd[1820]: eth0: Gained carrier
Sep 12 17:36:19.554977 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:19.565031 systemd-networkd[1820]: eth0: DHCPv4 address 172.31.19.87/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:36:19.577970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 12 17:36:19.583018 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 12 17:36:19.583375 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:36:19.589931 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:36:19.593988 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Sep 12 17:36:19.599942 kernel: ACPI: button: Sleep Button [SLPF]
Sep 12 17:36:19.621944 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1823)
Sep 12 17:36:19.666359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:19.682580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:19.682819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:19.693251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:19.707978 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:36:19.826932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:36:19.827824 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:36:19.835137 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:36:19.843143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:36:19.867580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:19.870755 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:36:19.873693 lvm[1938]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:36:19.900922 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:36:19.901653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:36:19.902312 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:36:19.902885 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:36:19.903340 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:36:19.903885 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:36:19.904385 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:36:19.904739 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:36:19.905117 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:36:19.905146 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:36:19.905464 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:36:19.907356 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:36:19.909109 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:36:19.916233 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:36:19.918342 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:36:19.919353 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:36:19.919877 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:36:19.920302 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:36:19.920706 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:36:19.920737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:36:19.924042 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:36:19.928197 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:36:19.930079 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:36:19.931023 lvm[1948]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:36:19.933013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:36:19.936148 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:36:19.936569 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:36:19.939481 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:36:19.949088 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:36:19.953362 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:36:19.955640 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:36:19.959058 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:36:19.962213 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:36:19.975132 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:36:19.976091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:36:19.979157 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:36:19.991103 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:36:20.005285 jq[1952]: false
Sep 12 17:36:20.028496 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found loop4
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found loop5
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found loop6
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found loop7
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p1
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p2
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p3
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found usr
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p4
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p6
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p7
Sep 12 17:36:20.029211 extend-filesystems[1953]: Found nvme0n1p9
Sep 12 17:36:20.029211 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9
Sep 12 17:36:20.031261 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:36:20.042373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:36:20.043094 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:36:20.045742 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:36:20.046964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:36:20.057469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:36:20.057922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:36:20.088967 dbus-daemon[1951]: [system] SELinux support is enabled
Sep 12 17:36:20.089121 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:36:20.092245 jq[1976]: true
Sep 12 17:36:20.091594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:36:20.091620 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:36:20.092533 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:36:20.092967 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:36:20.100061 update_engine[1962]: I20250912 17:36:20.099596 1962 main.cc:92] Flatcar Update Engine starting
Sep 12 17:36:20.110653 dbus-daemon[1951]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1820 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:36:20.112250 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9
Sep 12 17:36:20.113888 tar[1978]: linux-amd64/LICENSE
Sep 12 17:36:20.113888 tar[1978]: linux-amd64/helm
Sep 12 17:36:20.118858 extend-filesystems[1997]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:36:20.128727 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 17:36:20.118794 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:36:20.133105 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 17:36:20.134766 update_engine[1962]: I20250912 17:36:20.134399 1962 update_check_scheduler.cc:74] Next update check in 9m48s Sep 12 17:36:20.137244 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:36:20.148166 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:36:20.157670 jq[1995]: true Sep 12 17:36:20.159065 (ntainerd)[1989]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:36:20.159689 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:30:39 UTC 2025 (1): Starting Sep 12 17:36:20.159713 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:36:20.159721 ntpd[1955]: ---------------------------------------------------- Sep 12 17:36:20.159728 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:36:20.159735 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:36:20.159742 ntpd[1955]: corporation. Support and training for ntp-4 are Sep 12 17:36:20.159749 ntpd[1955]: available at https://www.nwtime.org/support Sep 12 17:36:20.159755 ntpd[1955]: ----------------------------------------------------
Sep 12 17:36:20.169446 ntpd[1955]: proto: precision = 0.056 usec (-24) Sep 12 17:36:20.170595 ntpd[1955]: basedate set to 2025-08-31 Sep 12 17:36:20.170613 ntpd[1955]: gps base set to 2025-08-31 (week 2382) Sep 12 17:36:20.177553 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:36:20.177608 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:36:20.177774 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:36:20.177803 ntpd[1955]: Listen normally on 3 eth0 172.31.19.87:123 Sep 12 17:36:20.177833 ntpd[1955]: Listen normally on 4 lo [::1]:123 Sep 12 17:36:20.179155 ntpd[1955]: bind(21) AF_INET6 fe80::441:c7ff:fee7:d9a1%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:36:20.179184 ntpd[1955]: unable to create socket on eth0 (5) for fe80::441:c7ff:fee7:d9a1%2#123 Sep 12 17:36:20.179195 ntpd[1955]: failed to init interface for address fe80::441:c7ff:fee7:d9a1%2 Sep 12 17:36:20.179226 ntpd[1955]: Listening on routing socket on fd #21 for interface updates Sep 12 17:36:20.186091 coreos-metadata[1950]: Sep 12 17:36:20.184 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:36:20.187757 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:36:20.188273 systemd-logind[1960]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:36:20.188291 systemd-logind[1960]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:36:20.188308 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:36:20.190222 systemd-logind[1960]: New seat seat0.
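[ntpd's bind(21) failure on fe80::441:c7ff:fee7:d9a1%2 is expected this early in boot: the IPv6 link-local address on eth0 has not finished duplicate address detection yet (systemd-networkd only reports "Gained IPv6LL" about a second later in this log), so the address cannot be bound. ntpd watches the routing socket for interface changes and picks the address up once it is usable, as the later "Listen normally on 7 eth0" line confirms; the listeners can be checked with, for example:

    ss -ulpn 'sport = :123'     # lists ntpd's UDP sockets, eventually including [fe80::...]:123]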
Sep 12 17:36:20.193392 coreos-metadata[1950]: Sep 12 17:36:20.190 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:36:20.193392 coreos-metadata[1950]: Sep 12 17:36:20.191 INFO Fetch successful Sep 12 17:36:20.193392 coreos-metadata[1950]: Sep 12 17:36:20.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:36:20.194587 coreos-metadata[1950]: Sep 12 17:36:20.194 INFO Fetch successful Sep 12 17:36:20.194587 coreos-metadata[1950]: Sep 12 17:36:20.194 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:36:20.198445 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:36:20.199325 coreos-metadata[1950]: Sep 12 17:36:20.199 INFO Fetch successful Sep 12 17:36:20.199370 coreos-metadata[1950]: Sep 12 17:36:20.199 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:36:20.200578 coreos-metadata[1950]: Sep 12 17:36:20.200 INFO Fetch successful Sep 12 17:36:20.202716 coreos-metadata[1950]: Sep 12 17:36:20.202 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:36:20.204638 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:36:20.206947 coreos-metadata[1950]: Sep 12 17:36:20.206 INFO Fetch failed with 404: resource not found Sep 12 17:36:20.207031 coreos-metadata[1950]: Sep 12 17:36:20.206 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:36:20.209882 coreos-metadata[1950]: Sep 12 17:36:20.209 INFO Fetch successful Sep 12 17:36:20.209974 coreos-metadata[1950]: Sep 12 17:36:20.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:36:20.212923 coreos-metadata[1950]: Sep 12 17:36:20.212 INFO Fetch successful Sep 12 17:36:20.212923 coreos-metadata[1950]: Sep 12 17:36:20.212 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:36:20.216077 coreos-metadata[1950]: Sep 12 17:36:20.215 INFO Fetch successful Sep 12 17:36:20.216077 coreos-metadata[1950]: Sep 12 17:36:20.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:36:20.220826 coreos-metadata[1950]: Sep 12 17:36:20.220 INFO Fetch successful Sep 12 17:36:20.220826 coreos-metadata[1950]: Sep 12 17:36:20.220 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:36:20.220826 coreos-metadata[1950]: Sep 12 17:36:20.220 INFO Fetch successful Sep 12 17:36:20.250952 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:36:20.274226 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:36:20.274226 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:36:20.274226 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:36:20.306049 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1826) Sep 12 17:36:20.306128 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:36:20.278335 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:36:20.278582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:36:20.382396 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
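[The metadata agent's "Putting http://169.254.169.254/latest/api/token" line is the IMDSv2 handshake: a short-lived session token is obtained with an HTTP PUT and then presented as a header on every metadata GET, which is why each fetch above succeeds without credentials. The same flow by hand, using the paths fetched above:

    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2021-01-03/meta-data/instance-id

The 404 on .../meta-data/ipv6 is normal for an instance with no IPv6 address assigned; the agent logs the miss and continues.]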
Sep 12 17:36:20.387240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:36:20.396123 bash[2043]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:36:20.400224 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:36:20.412250 systemd[1]: Starting sshkeys.service... Sep 12 17:36:20.534625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:36:20.546134 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:36:20.591266 locksmithd[2003]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:36:20.622370 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:36:20.623650 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:36:20.634945 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:36:20.648486 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:36:20.674602 coreos-metadata[2120]: Sep 12 17:36:20.672 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:36:20.674602 coreos-metadata[2120]: Sep 12 17:36:20.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:36:20.678468 coreos-metadata[2120]: Sep 12 17:36:20.677 INFO Fetch successful Sep 12 17:36:20.678468 coreos-metadata[2120]: Sep 12 17:36:20.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:36:20.680397 coreos-metadata[2120]: Sep 12 17:36:20.679 INFO Fetch successful Sep 12 17:36:20.683778 polkitd[2136]: Started polkitd version 121 Sep 12 17:36:20.684193 unknown[2120]: wrote ssh authorized keys file for user: core Sep 12 17:36:20.724625 polkitd[2136]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:36:20.724712 polkitd[2136]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:36:20.734083 polkitd[2136]: Finished loading, compiling and executing 2 rules Sep 12 17:36:20.736754 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:36:20.736532 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:36:20.739007 polkitd[2136]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:36:20.772168 update-ssh-keys[2140]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:36:20.775346 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:36:20.780960 systemd[1]: Finished sshkeys.service. Sep 12 17:36:20.796417 systemd-resolved[1777]: System hostname changed to 'ip-172-31-19-87'. Sep 12 17:36:20.796419 systemd-hostnamed[2000]: Hostname set to (transient) Sep 12 17:36:20.856057 sshd_keygen[1973]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:36:20.934246 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:36:20.947042 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:36:20.969956 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:36:20.970192 systemd[1]: Finished issuegen.service - Generate /run/issue. 
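[polkitd loads its policy from JavaScript rule files in /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d (two rules compiled here); the systemd-hostnamed activation alongside it is exactly the kind of privileged request those rules arbitrate. An illustrative rule file, not taken from this image, that would let the core user set the hostname:

    // /etc/polkit-1/rules.d/49-core-hostname.rules (illustrative)
    polkit.addRule(function(action, subject) {
        if (action.id == "org.freedesktop.hostname1.set-hostname" &&
            subject.user == "core") {
            return polkit.Result.YES;
        }
    });]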
Sep 12 17:36:20.981346 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:36:21.010593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:36:21.021210 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:36:21.030446 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:36:21.031378 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:36:21.033029 containerd[1989]: time="2025-09-12T17:36:21.032936009Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:36:21.080400 containerd[1989]: time="2025-09-12T17:36:21.080337186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.082654 containerd[1989]: time="2025-09-12T17:36:21.082602755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:21.082790 containerd[1989]: time="2025-09-12T17:36:21.082772417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:36:21.082876 containerd[1989]: time="2025-09-12T17:36:21.082861822Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:36:21.083148 containerd[1989]: time="2025-09-12T17:36:21.083126880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:36:21.083269 containerd[1989]: time="2025-09-12T17:36:21.083251431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083409 containerd[1989]: time="2025-09-12T17:36:21.083390001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083463001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083700197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083723387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083745046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083760787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.083928 containerd[1989]: time="2025-09-12T17:36:21.083858722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:36:21.084429 containerd[1989]: time="2025-09-12T17:36:21.084407277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:21.084683 containerd[1989]: time="2025-09-12T17:36:21.084659135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:21.084760 containerd[1989]: time="2025-09-12T17:36:21.084745585Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:36:21.084961 containerd[1989]: time="2025-09-12T17:36:21.084941773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:36:21.085114 containerd[1989]: time="2025-09-12T17:36:21.085077722Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:36:21.090896 containerd[1989]: time="2025-09-12T17:36:21.090806829Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:36:21.091528 containerd[1989]: time="2025-09-12T17:36:21.091150271Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:36:21.091528 containerd[1989]: time="2025-09-12T17:36:21.091185907Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:36:21.091528 containerd[1989]: time="2025-09-12T17:36:21.091209032Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:36:21.091528 containerd[1989]: time="2025-09-12T17:36:21.091233840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:36:21.091528 containerd[1989]: time="2025-09-12T17:36:21.091454805Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092409478Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092573194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092597407Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092619071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092640371Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092661514Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092680447Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092700870Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092721557Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092740608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092758462Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092777158Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092806472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.092929 containerd[1989]: time="2025-09-12T17:36:21.092827115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.092855853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.092876123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.092894825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093072943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093095739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093117256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093145779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093179906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093198940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093218405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093240616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093264034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093301779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093320766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.093455 containerd[1989]: time="2025-09-12T17:36:21.093349176Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093412830Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093439760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093457546Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093476402Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093493877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093512643Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093532478Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:36:21.094943 containerd[1989]: time="2025-09-12T17:36:21.093548049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:36:21.095278 containerd[1989]: time="2025-09-12T17:36:21.094015750Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:36:21.095278 containerd[1989]: time="2025-09-12T17:36:21.094108940Z" level=info msg="Connect containerd service" Sep 12 17:36:21.096018 containerd[1989]: time="2025-09-12T17:36:21.095984955Z" level=info msg="using legacy CRI server" Sep 12 17:36:21.096018 containerd[1989]: time="2025-09-12T17:36:21.096008762Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:36:21.096926 containerd[1989]: time="2025-09-12T17:36:21.096180471Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:36:21.099286 containerd[1989]: time="2025-09-12T17:36:21.099246012Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:36:21.099718 
containerd[1989]: time="2025-09-12T17:36:21.099682527Z" level=info msg="Start subscribing containerd event" Sep 12 17:36:21.099863 containerd[1989]: time="2025-09-12T17:36:21.099845413Z" level=info msg="Start recovering state" Sep 12 17:36:21.100070 containerd[1989]: time="2025-09-12T17:36:21.100053448Z" level=info msg="Start event monitor" Sep 12 17:36:21.100248 containerd[1989]: time="2025-09-12T17:36:21.100232548Z" level=info msg="Start snapshots syncer" Sep 12 17:36:21.100422 containerd[1989]: time="2025-09-12T17:36:21.100407093Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:36:21.100491 containerd[1989]: time="2025-09-12T17:36:21.100478751Z" level=info msg="Start streaming server" Sep 12 17:36:21.100990 containerd[1989]: time="2025-09-12T17:36:21.100968109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:36:21.101371 containerd[1989]: time="2025-09-12T17:36:21.101284760Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:36:21.101482 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:36:21.103215 containerd[1989]: time="2025-09-12T17:36:21.103190150Z" level=info msg="containerd successfully booted in 0.071321s" Sep 12 17:36:21.160144 ntpd[1955]: bind(24) AF_INET6 fe80::441:c7ff:fee7:d9a1%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:36:21.160660 ntpd[1955]: 12 Sep 17:36:21 ntpd[1955]: bind(24) AF_INET6 fe80::441:c7ff:fee7:d9a1%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:36:21.160733 ntpd[1955]: unable to create socket on eth0 (6) for fe80::441:c7ff:fee7:d9a1%2#123 Sep 12 17:36:21.160839 ntpd[1955]: 12 Sep 17:36:21 ntpd[1955]: unable to create socket on eth0 (6) for fe80::441:c7ff:fee7:d9a1%2#123 Sep 12 17:36:21.160889 ntpd[1955]: failed to init interface for address fe80::441:c7ff:fee7:d9a1%2 Sep 12 17:36:21.160975 ntpd[1955]: 12 Sep 17:36:21 ntpd[1955]: failed to init interface for address fe80::441:c7ff:fee7:d9a1%2 Sep 12 17:36:21.253042 tar[1978]: linux-amd64/README.md Sep 12 17:36:21.264899 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:36:21.606209 systemd-networkd[1820]: eth0: Gained IPv6LL Sep 12 17:36:21.609384 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:36:21.610808 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:36:21.615349 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:36:21.624778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:21.630042 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:36:21.680919 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:36:21.690753 amazon-ssm-agent[2177]: Initializing new seelog logger Sep 12 17:36:21.690753 amazon-ssm-agent[2177]: New Seelog Logger Creation Complete Sep 12 17:36:21.690753 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.690753 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.690753 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 processing appconfig overrides Sep 12 17:36:21.691359 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.691359 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:36:21.691359 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 processing appconfig overrides Sep 12 17:36:21.691635 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.691635 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.691727 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 processing appconfig overrides Sep 12 17:36:21.691841 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO Proxy environment variables: Sep 12 17:36:21.694987 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.694987 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:36:21.695126 amazon-ssm-agent[2177]: 2025/09/12 17:36:21 processing appconfig overrides Sep 12 17:36:21.792136 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO https_proxy: Sep 12 17:36:21.890448 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO http_proxy: Sep 12 17:36:21.989316 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO no_proxy: Sep 12 17:36:22.087362 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO Agent will take identity from EC2 Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:36:22.120306 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [Registrar] Starting registrar module Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:22 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:22 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:22 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:36:22.120566 amazon-ssm-agent[2177]: 2025-09-12 17:36:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:36:22.185187 amazon-ssm-agent[2177]: 2025-09-12 17:36:22 INFO [CredentialRefresher] Next credential rotation will be in 30.074994141183332 minutes Sep 12 17:36:23.110418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:36:23.117220 systemd[1]: Started sshd@0-172.31.19.87:22-147.75.109.163:56168.service - OpenSSH per-connection server daemon (147.75.109.163:56168). 
Sep 12 17:36:23.137793 amazon-ssm-agent[2177]: 2025-09-12 17:36:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:36:23.238670 amazon-ssm-agent[2177]: 2025-09-12 17:36:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2200) started Sep 12 17:36:23.308857 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 56168 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:23.311744 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:23.323989 systemd-logind[1960]: New session 1 of user core. Sep 12 17:36:23.324796 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:36:23.332403 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:36:23.339543 amazon-ssm-agent[2177]: 2025-09-12 17:36:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:36:23.351868 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:36:23.358946 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:36:23.364006 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:36:23.485685 systemd[2213]: Queued start job for default target default.target. Sep 12 17:36:23.490079 systemd[2213]: Created slice app.slice - User Application Slice. Sep 12 17:36:23.490114 systemd[2213]: Reached target paths.target - Paths. Sep 12 17:36:23.490128 systemd[2213]: Reached target timers.target - Timers. Sep 12 17:36:23.491442 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:36:23.510099 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:36:23.510173 systemd[2213]: Reached target sockets.target - Sockets. Sep 12 17:36:23.510189 systemd[2213]: Reached target basic.target - Basic System. Sep 12 17:36:23.510235 systemd[2213]: Reached target default.target - Main User Target. Sep 12 17:36:23.510268 systemd[2213]: Startup finished in 138ms. Sep 12 17:36:23.510541 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:36:23.519358 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:36:23.672358 systemd[1]: Started sshd@1-172.31.19.87:22-147.75.109.163:56178.service - OpenSSH per-connection server daemon (147.75.109.163:56178). Sep 12 17:36:23.828464 sshd[2224]: Accepted publickey for core from 147.75.109.163 port 56178 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:23.830347 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:23.836356 systemd-logind[1960]: New session 2 of user core. Sep 12 17:36:23.842530 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:36:23.961681 sshd[2224]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:23.965024 systemd[1]: sshd@1-172.31.19.87:22-147.75.109.163:56178.service: Deactivated successfully. Sep 12 17:36:23.966855 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:36:23.968340 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:36:23.969524 systemd-logind[1960]: Removed session 2. 
Sep 12 17:36:24.003318 systemd[1]: Started sshd@2-172.31.19.87:22-147.75.109.163:56188.service - OpenSSH per-connection server daemon (147.75.109.163:56188). Sep 12 17:36:24.159464 sshd[2231]: Accepted publickey for core from 147.75.109.163 port 56188 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:24.160956 ntpd[1955]: 12 Sep 17:36:24 ntpd[1955]: Listen normally on 7 eth0 [fe80::441:c7ff:fee7:d9a1%2]:123 Sep 12 17:36:24.160654 ntpd[1955]: Listen normally on 7 eth0 [fe80::441:c7ff:fee7:d9a1%2]:123 Sep 12 17:36:24.162608 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:24.163108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:24.166298 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:36:24.167307 systemd[1]: Startup finished in 614ms (kernel) + 7.415s (initrd) + 8.093s (userspace) = 16.123s. Sep 12 17:36:24.178687 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:36:24.186690 systemd-logind[1960]: New session 3 of user core. Sep 12 17:36:24.191561 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:36:24.312884 sshd[2231]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:24.316203 systemd[1]: sshd@2-172.31.19.87:22-147.75.109.163:56188.service: Deactivated successfully. Sep 12 17:36:24.317712 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:36:24.319727 systemd-logind[1960]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:36:24.320967 systemd-logind[1960]: Removed session 3. Sep 12 17:36:25.367309 kubelet[2237]: E0912 17:36:25.367235 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:36:25.370278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:36:25.370439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:36:25.370735 systemd[1]: kubelet.service: Consumed 1.115s CPU time. Sep 12 17:36:27.536255 systemd-resolved[1777]: Clock change detected. Flushing caches. Sep 12 17:36:34.725157 systemd[1]: Started sshd@3-172.31.19.87:22-147.75.109.163:51458.service - OpenSSH per-connection server daemon (147.75.109.163:51458). Sep 12 17:36:34.878425 sshd[2254]: Accepted publickey for core from 147.75.109.163 port 51458 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:34.879911 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:34.885011 systemd-logind[1960]: New session 4 of user core. Sep 12 17:36:34.893067 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:36:35.012704 sshd[2254]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:35.015895 systemd[1]: sshd@3-172.31.19.87:22-147.75.109.163:51458.service: Deactivated successfully. Sep 12 17:36:35.017573 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:36:35.019046 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:36:35.020585 systemd-logind[1960]: Removed session 4. 
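[The kubelet failure above, repeated on each scheduled restart below, is the expected pre-bootstrap state: the kubelet unit starts, but /var/lib/kubelet/config.yaml only exists once the node is initialized, typically by kubeadm init or kubeadm join writing it. A minimal sketch of such a file, with illustrative values rather than anything read from this host:

    # /var/lib/kubelet/config.yaml (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # matches SystemdCgroup=true in containerd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

Note also the clock step right after the failure: ntpd has synchronized and stepped the clock forward, which is apparently why systemd-resolved flushes its caches and the journal jumps from 17:36:25 to 17:36:27.]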
Sep 12 17:36:35.050279 systemd[1]: Started sshd@4-172.31.19.87:22-147.75.109.163:51472.service - OpenSSH per-connection server daemon (147.75.109.163:51472). Sep 12 17:36:35.204172 sshd[2261]: Accepted publickey for core from 147.75.109.163 port 51472 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:35.205682 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:35.211192 systemd-logind[1960]: New session 5 of user core. Sep 12 17:36:35.225046 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:36:35.339179 sshd[2261]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:35.342911 systemd[1]: sshd@4-172.31.19.87:22-147.75.109.163:51472.service: Deactivated successfully. Sep 12 17:36:35.344973 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:36:35.346523 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:36:35.347997 systemd-logind[1960]: Removed session 5. Sep 12 17:36:35.375184 systemd[1]: Started sshd@5-172.31.19.87:22-147.75.109.163:51486.service - OpenSSH per-connection server daemon (147.75.109.163:51486). Sep 12 17:36:35.527543 sshd[2268]: Accepted publickey for core from 147.75.109.163 port 51486 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:35.528993 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:35.533834 systemd-logind[1960]: New session 6 of user core. Sep 12 17:36:35.541094 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:36:35.661557 sshd[2268]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:35.664629 systemd[1]: sshd@5-172.31.19.87:22-147.75.109.163:51486.service: Deactivated successfully. Sep 12 17:36:35.666500 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:36:35.667857 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:36:35.669023 systemd-logind[1960]: Removed session 6. Sep 12 17:36:35.693671 systemd[1]: Started sshd@6-172.31.19.87:22-147.75.109.163:51490.service - OpenSSH per-connection server daemon (147.75.109.163:51490). Sep 12 17:36:35.770820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:36:35.782130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:35.856911 sshd[2275]: Accepted publickey for core from 147.75.109.163 port 51490 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:35.858476 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:35.862943 systemd-logind[1960]: New session 7 of user core. Sep 12 17:36:35.868015 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:36:36.016675 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:36:36.018819 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:36.029418 sudo[2281]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:36.052858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:36.054284 sshd[2275]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:36.060248 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit. 
Sep 12 17:36:36.060938 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:36:36.061299 systemd[1]: sshd@6-172.31.19.87:22-147.75.109.163:51490.service: Deactivated successfully. Sep 12 17:36:36.064975 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:36:36.069574 systemd-logind[1960]: Removed session 7. Sep 12 17:36:36.094863 systemd[1]: Started sshd@7-172.31.19.87:22-147.75.109.163:51500.service - OpenSSH per-connection server daemon (147.75.109.163:51500). Sep 12 17:36:36.127012 kubelet[2288]: E0912 17:36:36.126973 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:36:36.131371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:36:36.131565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:36:36.252012 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 51500 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:36.253532 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:36.259048 systemd-logind[1960]: New session 8 of user core. Sep 12 17:36:36.269021 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:36:36.365531 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:36:36.365947 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:36.369550 sudo[2302]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:36.375261 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:36:36.375548 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:36.395212 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:36:36.397412 auditctl[2305]: No rules Sep 12 17:36:36.398001 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:36:36.398230 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:36:36.401247 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:36:36.441106 augenrules[2323]: No rules Sep 12 17:36:36.442699 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:36:36.444035 sudo[2301]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:36.467395 sshd[2297]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:36.470490 systemd[1]: sshd@7-172.31.19.87:22-147.75.109.163:51500.service: Deactivated successfully. Sep 12 17:36:36.472065 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:36:36.473209 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:36:36.474327 systemd-logind[1960]: Removed session 8. Sep 12 17:36:36.502064 systemd[1]: Started sshd@8-172.31.19.87:22-147.75.109.163:51504.service - OpenSSH per-connection server daemon (147.75.109.163:51504). 
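[The sudo sequence above empties the audit rule set: the two files under /etc/audit/rules.d are removed and audit-rules.service is restarted, so both auditctl and augenrules report "No rules". What the service restart effectively runs is, as a sketch of the usual stop/start pair:

    auditctl -D          # flush the currently loaded kernel audit rules
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the (now empty) result]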
Sep 12 17:36:36.668051 sshd[2331]: Accepted publickey for core from 147.75.109.163 port 51504 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:36:36.669477 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:36.674433 systemd-logind[1960]: New session 9 of user core. Sep 12 17:36:36.681030 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:36:36.781305 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:36:36.781593 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:37.328183 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:36:37.330115 (dockerd)[2349]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:36:37.928067 dockerd[2349]: time="2025-09-12T17:36:37.928002367Z" level=info msg="Starting up" Sep 12 17:36:38.085629 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2528812978-merged.mount: Deactivated successfully. Sep 12 17:36:38.123841 dockerd[2349]: time="2025-09-12T17:36:38.123762994Z" level=info msg="Loading containers: start." Sep 12 17:36:38.282820 kernel: Initializing XFRM netlink socket Sep 12 17:36:38.325157 (udev-worker)[2371]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:36:38.379868 systemd-networkd[1820]: docker0: Link UP Sep 12 17:36:38.397344 dockerd[2349]: time="2025-09-12T17:36:38.397291114Z" level=info msg="Loading containers: done." Sep 12 17:36:38.423449 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2932157063-merged.mount: Deactivated successfully. Sep 12 17:36:38.429971 dockerd[2349]: time="2025-09-12T17:36:38.429902133Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:36:38.430131 dockerd[2349]: time="2025-09-12T17:36:38.430024156Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:36:38.430161 dockerd[2349]: time="2025-09-12T17:36:38.430128599Z" level=info msg="Daemon has completed initialization" Sep 12 17:36:38.465605 dockerd[2349]: time="2025-09-12T17:36:38.465412239Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:36:38.465513 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:36:39.845569 containerd[1989]: time="2025-09-12T17:36:39.845530199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:36:40.420046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2399919240.mount: Deactivated successfully. 
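[dockerd's startup above shows the usual first-boot probes: the check-overlayfs-support and opaque-bug-check mounts verify the overlay2 storage driver, XFRM netlink is initialized for network encryption support, and systemd-networkd brings up the docker0 bridge. The "Not using native diff" line is a performance note rather than an error: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled the daemon falls back to its own diff implementation when building images. The selected driver can be confirmed with:

    docker info --format '{{.Driver}}'    # -> overlay2]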
Sep 12 17:36:42.281268 containerd[1989]: time="2025-09-12T17:36:42.281213582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.283100 containerd[1989]: time="2025-09-12T17:36:42.283045085Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 17:36:42.287821 containerd[1989]: time="2025-09-12T17:36:42.284459511Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.291543 containerd[1989]: time="2025-09-12T17:36:42.291491729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:42.292775 containerd[1989]: time="2025-09-12T17:36:42.292732126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.447158813s" Sep 12 17:36:42.292982 containerd[1989]: time="2025-09-12T17:36:42.292958155Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 17:36:42.293923 containerd[1989]: time="2025-09-12T17:36:42.293892642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:36:44.244118 containerd[1989]: time="2025-09-12T17:36:44.244057368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:44.245643 containerd[1989]: time="2025-09-12T17:36:44.245421163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 17:36:44.247412 containerd[1989]: time="2025-09-12T17:36:44.246963069Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:44.250564 containerd[1989]: time="2025-09-12T17:36:44.250526050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:44.251416 containerd[1989]: time="2025-09-12T17:36:44.251388302Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.957346399s" Sep 12 17:36:44.251529 containerd[1989]: time="2025-09-12T17:36:44.251514025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 17:36:44.252421 
containerd[1989]: time="2025-09-12T17:36:44.252394734Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:36:45.791255 containerd[1989]: time="2025-09-12T17:36:45.791197676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:45.792599 containerd[1989]: time="2025-09-12T17:36:45.792389529Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 17:36:45.793865 containerd[1989]: time="2025-09-12T17:36:45.793821215Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:45.798018 containerd[1989]: time="2025-09-12T17:36:45.797957575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:45.799222 containerd[1989]: time="2025-09-12T17:36:45.799083385Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.546661564s" Sep 12 17:36:45.799222 containerd[1989]: time="2025-09-12T17:36:45.799119374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 17:36:45.800127 containerd[1989]: time="2025-09-12T17:36:45.800095317Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:36:46.271412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:36:46.278160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:46.659137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:46.670767 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:36:46.770642 kubelet[2565]: E0912 17:36:46.770373 2565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:36:46.774731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:36:46.774948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:36:47.015526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171468809.mount: Deactivated successfully. 
Sep 12 17:36:47.624145 containerd[1989]: time="2025-09-12T17:36:47.624074726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.625604 containerd[1989]: time="2025-09-12T17:36:47.625549317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 17:36:47.627100 containerd[1989]: time="2025-09-12T17:36:47.626877737Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.629506 containerd[1989]: time="2025-09-12T17:36:47.629475473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.630148 containerd[1989]: time="2025-09-12T17:36:47.630115642Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.829987097s" Sep 12 17:36:47.630273 containerd[1989]: time="2025-09-12T17:36:47.630149641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 17:36:47.630772 containerd[1989]: time="2025-09-12T17:36:47.630680844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:36:48.111897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524204612.mount: Deactivated successfully. 
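The transient mount units being deactivated here (var-lib-containerd-tmpmounts-containerd\x2dmountNNNN.mount) show systemd's path escaping: "/" separators become "-", and a literal "-" inside a path component becomes "\x2d". A simplified sketch of that naming rule; real systemd-escape also hex-escapes other special characters, which is omitted here:

    // Simplified sketch of systemd mount-unit naming; handles only the "/" and
    // "-" rules visible in this log, not the full systemd-escape character set.
    package main

    import (
        "fmt"
        "strings"
    )

    func mountUnitName(path string) string {
        parts := strings.Split(strings.Trim(path, "/"), "/")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, "-", `\x2d`) // literal dash -> \x2d
        }
        return strings.Join(parts, "-") + ".mount" // "/" separators -> "-"
    }

    func main() {
        // Reproduces the unit name deactivated in the log above.
        fmt.Println(mountUnitName("/var/lib/containerd/tmpmounts/containerd-mount1524204612"))
        // var-lib-containerd-tmpmounts-containerd\x2dmount1524204612.mount
    }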
Sep 12 17:36:49.086185 containerd[1989]: time="2025-09-12T17:36:49.086126276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.087469 containerd[1989]: time="2025-09-12T17:36:49.087237198Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:36:49.090816 containerd[1989]: time="2025-09-12T17:36:49.088817166Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.092477 containerd[1989]: time="2025-09-12T17:36:49.092440405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.093555 containerd[1989]: time="2025-09-12T17:36:49.093523483Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.46280898s" Sep 12 17:36:49.093660 containerd[1989]: time="2025-09-12T17:36:49.093645758Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:36:49.094518 containerd[1989]: time="2025-09-12T17:36:49.094490625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:36:49.761347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758786596.mount: Deactivated successfully. 
Sep 12 17:36:49.767808 containerd[1989]: time="2025-09-12T17:36:49.767757049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.769066 containerd[1989]: time="2025-09-12T17:36:49.768856835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:36:49.770430 containerd[1989]: time="2025-09-12T17:36:49.770378273Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.773111 containerd[1989]: time="2025-09-12T17:36:49.773055053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.774496 containerd[1989]: time="2025-09-12T17:36:49.773596898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 679.077228ms" Sep 12 17:36:49.774496 containerd[1989]: time="2025-09-12T17:36:49.773628268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:36:49.774496 containerd[1989]: time="2025-09-12T17:36:49.774347819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:36:50.302659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840537684.mount: Deactivated successfully. Sep 12 17:36:51.188129 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 12 17:36:52.708830 containerd[1989]: time="2025-09-12T17:36:52.708763660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:52.711693 containerd[1989]: time="2025-09-12T17:36:52.711491549Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 17:36:52.713813 containerd[1989]: time="2025-09-12T17:36:52.712987733Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:52.718393 containerd[1989]: time="2025-09-12T17:36:52.718352973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:52.724262 containerd[1989]: time="2025-09-12T17:36:52.723932351Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.949554434s" Sep 12 17:36:52.724262 containerd[1989]: time="2025-09-12T17:36:52.723973132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 17:36:54.966826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:54.973151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:55.009024 systemd[1]: Reloading requested from client PID 2717 ('systemctl') (unit session-9.scope)... Sep 12 17:36:55.009043 systemd[1]: Reloading... Sep 12 17:36:55.158824 zram_generator::config[2760]: No configuration found. Sep 12 17:36:55.317295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:36:55.438151 systemd[1]: Reloading finished in 428 ms. Sep 12 17:36:55.498516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:55.503601 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:36:55.503885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:55.509190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:55.732785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:55.745273 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:36:55.813393 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:36:55.813393 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
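Each "Pulled image ... in <duration>" line pairs a transferred size with a wall-clock duration, so effective pull throughput can be read straight off the log (e.g. etcd: 57680541 bytes in ~2.95s, roughly 18.7 MiB/s). A small sketch computing this for the pulls above, with sizes and durations copied verbatim from the log:

    // Effective pull throughput for the image pulls logged above; bytes and
    // seconds are taken directly from the "Pulled image" lines.
    package main

    import "fmt"

    func main() {
        pulls := []struct {
            image   string
            bytes   float64
            seconds float64
        }{
            {"kube-apiserver:v1.32.9", 28834515, 2.447158813},
            {"kube-controller-manager:v1.32.9", 26421706, 1.957346399},
            {"kube-scheduler:v1.32.9", 20810986, 1.546661564},
            {"kube-proxy:v1.32.9", 30923225, 1.829987097},
            {"coredns:v1.11.3", 18562039, 1.46280898},
            {"pause:3.10", 320368, 0.679077228},
            {"etcd:3.5.16-0", 57680541, 2.949554434},
        }
        for _, p := range pulls {
            fmt.Printf("%-35s %6.1f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
        }
    }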
Sep 12 17:36:55.813393 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:36:55.813393 kubelet[2822]: I0912 17:36:55.813357 2822 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:36:56.245712 kubelet[2822]: I0912 17:36:56.245643 2822 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:36:56.245712 kubelet[2822]: I0912 17:36:56.245698 2822 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:36:56.246641 kubelet[2822]: I0912 17:36:56.246433 2822 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:36:56.305992 kubelet[2822]: E0912 17:36:56.305933 2822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:56.307238 kubelet[2822]: I0912 17:36:56.307185 2822 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:36:56.328509 kubelet[2822]: E0912 17:36:56.327399 2822 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:36:56.328509 kubelet[2822]: I0912 17:36:56.327539 2822 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:36:56.332624 kubelet[2822]: I0912 17:36:56.332594 2822 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:36:56.335228 kubelet[2822]: I0912 17:36:56.335163 2822 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:36:56.335448 kubelet[2822]: I0912 17:36:56.335221 2822 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:36:56.339189 kubelet[2822]: I0912 17:36:56.339132 2822 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:36:56.339189 kubelet[2822]: I0912 17:36:56.339182 2822 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:36:56.340967 kubelet[2822]: I0912 17:36:56.340920 2822 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:36:56.347090 kubelet[2822]: I0912 17:36:56.346973 2822 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:36:56.347090 kubelet[2822]: I0912 17:36:56.347019 2822 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:36:56.347090 kubelet[2822]: I0912 17:36:56.347041 2822 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:36:56.347090 kubelet[2822]: I0912 17:36:56.347051 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:36:56.353068 kubelet[2822]: W0912 17:36:56.352648 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-87&limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:56.353068 kubelet[2822]: E0912 17:36:56.352713 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-87&limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:56.353259 kubelet[2822]: W0912 
17:36:56.353074 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:56.353259 kubelet[2822]: E0912 17:36:56.353108 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:56.355648 kubelet[2822]: I0912 17:36:56.355483 2822 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:36:56.361265 kubelet[2822]: I0912 17:36:56.360268 2822 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:36:56.361265 kubelet[2822]: W0912 17:36:56.360355 2822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:36:56.361499 kubelet[2822]: I0912 17:36:56.361475 2822 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:36:56.361544 kubelet[2822]: I0912 17:36:56.361520 2822 server.go:1287] "Started kubelet" Sep 12 17:36:56.375839 kubelet[2822]: I0912 17:36:56.375809 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:36:56.379704 kubelet[2822]: E0912 17:36:56.377278 2822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-87.18649994326c2b50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-87,UID:ip-172-31-19-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-87,},FirstTimestamp:2025-09-12 17:36:56.361495376 +0000 UTC m=+0.611075862,LastTimestamp:2025-09-12 17:36:56.361495376 +0000 UTC m=+0.611075862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-87,}" Sep 12 17:36:56.386534 kubelet[2822]: I0912 17:36:56.386478 2822 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:36:56.387853 kubelet[2822]: I0912 17:36:56.387830 2822 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:36:56.392107 kubelet[2822]: I0912 17:36:56.391226 2822 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:36:56.392107 kubelet[2822]: E0912 17:36:56.391570 2822 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-87\" not found" Sep 12 17:36:56.393246 kubelet[2822]: I0912 17:36:56.393175 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:36:56.393574 kubelet[2822]: I0912 17:36:56.393541 2822 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:36:56.395336 kubelet[2822]: I0912 17:36:56.395292 2822 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:36:56.398994 kubelet[2822]: E0912 17:36:56.397983 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": dial tcp 172.31.19.87:6443: connect: connection refused" interval="200ms" Sep 12 17:36:56.398994 kubelet[2822]: I0912 17:36:56.398227 2822 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:36:56.398994 kubelet[2822]: I0912 17:36:56.398327 2822 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:36:56.398994 kubelet[2822]: W0912 17:36:56.398850 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:56.398994 kubelet[2822]: E0912 17:36:56.398920 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:56.399899 kubelet[2822]: I0912 17:36:56.399878 2822 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:36:56.400171 kubelet[2822]: I0912 17:36:56.400143 2822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:36:56.403833 kubelet[2822]: E0912 17:36:56.403719 2822 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:36:56.404295 kubelet[2822]: I0912 17:36:56.404276 2822 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:36:56.424106 kubelet[2822]: I0912 17:36:56.424035 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:36:56.428360 kubelet[2822]: I0912 17:36:56.428185 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:36:56.428360 kubelet[2822]: I0912 17:36:56.428237 2822 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:36:56.428360 kubelet[2822]: I0912 17:36:56.428261 2822 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:36:56.428622 kubelet[2822]: I0912 17:36:56.428597 2822 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:36:56.428692 kubelet[2822]: E0912 17:36:56.428672 2822 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:36:56.428914 kubelet[2822]: I0912 17:36:56.428895 2822 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:36:56.428914 kubelet[2822]: I0912 17:36:56.428912 2822 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:36:56.429027 kubelet[2822]: I0912 17:36:56.428946 2822 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:36:56.430860 kubelet[2822]: W0912 17:36:56.430243 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:56.430860 kubelet[2822]: E0912 17:36:56.430294 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:56.432637 kubelet[2822]: I0912 17:36:56.432611 2822 policy_none.go:49] "None policy: Start" Sep 12 17:36:56.432709 kubelet[2822]: I0912 17:36:56.432645 2822 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:36:56.432709 kubelet[2822]: I0912 17:36:56.432660 2822 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:36:56.442272 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:36:56.460736 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:36:56.464566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:36:56.478583 kubelet[2822]: I0912 17:36:56.478036 2822 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:36:56.478583 kubelet[2822]: I0912 17:36:56.478229 2822 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:36:56.478583 kubelet[2822]: I0912 17:36:56.478239 2822 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:36:56.478583 kubelet[2822]: I0912 17:36:56.478521 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:36:56.480230 kubelet[2822]: E0912 17:36:56.480207 2822 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:36:56.480392 kubelet[2822]: E0912 17:36:56.480366 2822 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-87\" not found" Sep 12 17:36:56.542230 systemd[1]: Created slice kubepods-burstable-podd843b5c07bd8595a16214a0aae1c8dc5.slice - libcontainer container kubepods-burstable-podd843b5c07bd8595a16214a0aae1c8dc5.slice. 
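The slices created here (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice, then kubepods-burstable-pod<hash>.slice for the first static pod) follow from "CgroupDriver":"systemd" in the NodeConfig dump above. A hedged sketch of the per-pod slice naming: guaranteed pods sit directly under kubepods.slice, the other QoS classes get an intermediate slice, and the kubelet replaces dashes in real pod UIDs with underscores (the static-pod hashes in this log happen to contain none):

    // Sketch of the systemd cgroup driver's per-pod slice names seen above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_") // systemd-safe UID
        if qosClass == "guaranteed" {
            return "kubepods-pod" + uid + ".slice"
        }
        return "kubepods-" + qosClass + "-pod" + uid + ".slice"
    }

    func main() {
        // Matches the slice created for kube-controller-manager above.
        fmt.Println(podSliceName("burstable", "d843b5c07bd8595a16214a0aae1c8dc5"))
    }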
Sep 12 17:36:56.554895 kubelet[2822]: E0912 17:36:56.554861 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:56.556916 systemd[1]: Created slice kubepods-burstable-pod651a85fbb6eb0c1fb39f0a4394f3ded5.slice - libcontainer container kubepods-burstable-pod651a85fbb6eb0c1fb39f0a4394f3ded5.slice. Sep 12 17:36:56.566382 kubelet[2822]: E0912 17:36:56.566358 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:56.569073 systemd[1]: Created slice kubepods-burstable-poda0ed9d94fdbc4e3a694672dbbfe47f97.slice - libcontainer container kubepods-burstable-poda0ed9d94fdbc4e3a694672dbbfe47f97.slice. Sep 12 17:36:56.582471 kubelet[2822]: E0912 17:36:56.582201 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:56.583746 kubelet[2822]: I0912 17:36:56.583722 2822 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:36:56.584165 kubelet[2822]: E0912 17:36:56.584126 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.87:6443/api/v1/nodes\": dial tcp 172.31.19.87:6443: connect: connection refused" node="ip-172-31-19-87" Sep 12 17:36:56.599898 kubelet[2822]: E0912 17:36:56.599854 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": dial tcp 172.31.19.87:6443: connect: connection refused" interval="400ms" Sep 12 17:36:56.699814 kubelet[2822]: I0912 17:36:56.699495 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:36:56.699814 kubelet[2822]: I0912 17:36:56.699564 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/651a85fbb6eb0c1fb39f0a4394f3ded5-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-87\" (UID: \"651a85fbb6eb0c1fb39f0a4394f3ded5\") " pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:36:56.699814 kubelet[2822]: I0912 17:36:56.699598 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:36:56.699814 kubelet[2822]: I0912 17:36:56.699615 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:36:56.699814 kubelet[2822]: I0912 17:36:56.699631 
2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:36:56.700061 kubelet[2822]: I0912 17:36:56.699662 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-ca-certs\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:36:56.700061 kubelet[2822]: I0912 17:36:56.699701 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:36:56.700061 kubelet[2822]: I0912 17:36:56.699718 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:36:56.700061 kubelet[2822]: I0912 17:36:56.699748 2822 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:36:56.786682 kubelet[2822]: I0912 17:36:56.786314 2822 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:36:56.786682 kubelet[2822]: E0912 17:36:56.786652 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.87:6443/api/v1/nodes\": dial tcp 172.31.19.87:6443: connect: connection refused" node="ip-172-31-19-87" Sep 12 17:36:56.856859 containerd[1989]: time="2025-09-12T17:36:56.856695116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-87,Uid:d843b5c07bd8595a16214a0aae1c8dc5,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:56.875441 containerd[1989]: time="2025-09-12T17:36:56.875380557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-87,Uid:651a85fbb6eb0c1fb39f0a4394f3ded5,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:56.885014 containerd[1989]: time="2025-09-12T17:36:56.884895479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-87,Uid:a0ed9d94fdbc4e3a694672dbbfe47f97,Namespace:kube-system,Attempt:0,}" Sep 12 17:36:57.001349 kubelet[2822]: E0912 17:36:57.001273 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": dial tcp 172.31.19.87:6443: connect: connection refused" interval="800ms" Sep 12 17:36:57.184180 kubelet[2822]: W0912 17:36:57.184021 2822 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:57.184180 kubelet[2822]: E0912 17:36:57.184090 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:57.188805 kubelet[2822]: I0912 17:36:57.188762 2822 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:36:57.189173 kubelet[2822]: E0912 17:36:57.189126 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.87:6443/api/v1/nodes\": dial tcp 172.31.19.87:6443: connect: connection refused" node="ip-172-31-19-87" Sep 12 17:36:57.263431 kubelet[2822]: W0912 17:36:57.263382 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:57.263431 kubelet[2822]: E0912 17:36:57.263428 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:57.324432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1418799600.mount: Deactivated successfully. 
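The lease controller's retry interval is doubling across these entries: 200ms, then 400ms, then 800ms, with 1.6s appearing further down, a plain exponential backoff while the apiserver stays unreachable. A minimal sketch of that schedule; the ceiling is an assumption, since the log never shows one being hit:

    // Doubling backoff matching the "Failed to ensure lease exists, will retry"
    // intervals in this log: 200ms -> 400ms -> 800ms -> 1.6s.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        ceiling := 7 * time.Second // assumed cap, not observed in the log
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
            interval *= 2
            if interval > ceiling {
                interval = ceiling
            }
        }
    }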
Sep 12 17:36:57.334013 containerd[1989]: time="2025-09-12T17:36:57.333892989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:36:57.334988 containerd[1989]: time="2025-09-12T17:36:57.334950970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:36:57.336357 containerd[1989]: time="2025-09-12T17:36:57.336314146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:36:57.338207 containerd[1989]: time="2025-09-12T17:36:57.338071741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:36:57.339185 containerd[1989]: time="2025-09-12T17:36:57.339139444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:36:57.340643 containerd[1989]: time="2025-09-12T17:36:57.340606248Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:36:57.342810 containerd[1989]: time="2025-09-12T17:36:57.341450277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:36:57.344857 containerd[1989]: time="2025-09-12T17:36:57.344822472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:36:57.345677 containerd[1989]: time="2025-09-12T17:36:57.345640501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.668498ms" Sep 12 17:36:57.346841 containerd[1989]: time="2025-09-12T17:36:57.346810169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.03181ms" Sep 12 17:36:57.347586 containerd[1989]: time="2025-09-12T17:36:57.347547226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.071199ms" Sep 12 17:36:57.553576 containerd[1989]: time="2025-09-12T17:36:57.551932885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:57.553576 containerd[1989]: time="2025-09-12T17:36:57.553249108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:57.554481 containerd[1989]: time="2025-09-12T17:36:57.553965749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.554481 containerd[1989]: time="2025-09-12T17:36:57.554126302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.558042 containerd[1989]: time="2025-09-12T17:36:57.557658240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:57.558042 containerd[1989]: time="2025-09-12T17:36:57.557754502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:57.559000 containerd[1989]: time="2025-09-12T17:36:57.557779942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.559453 containerd[1989]: time="2025-09-12T17:36:57.559132922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.564441 containerd[1989]: time="2025-09-12T17:36:57.564148890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:57.564441 containerd[1989]: time="2025-09-12T17:36:57.564227322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:57.564441 containerd[1989]: time="2025-09-12T17:36:57.564250075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.564441 containerd[1989]: time="2025-09-12T17:36:57.564345477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:57.589039 systemd[1]: Started cri-containerd-b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf.scope - libcontainer container b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf. Sep 12 17:36:57.597578 systemd[1]: Started cri-containerd-9e5293305541a2046ce17fc5a270668dd819bc1c3ed489ca4443b27ee8184fe2.scope - libcontainer container 9e5293305541a2046ce17fc5a270668dd819bc1c3ed489ca4443b27ee8184fe2. Sep 12 17:36:57.628016 systemd[1]: Started cri-containerd-5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9.scope - libcontainer container 5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9. 
Sep 12 17:36:57.650779 kubelet[2822]: W0912 17:36:57.650191 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:57.650779 kubelet[2822]: E0912 17:36:57.650243 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:57.703201 containerd[1989]: time="2025-09-12T17:36:57.703145521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-87,Uid:d843b5c07bd8595a16214a0aae1c8dc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9\"" Sep 12 17:36:57.711997 containerd[1989]: time="2025-09-12T17:36:57.711918206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-87,Uid:a0ed9d94fdbc4e3a694672dbbfe47f97,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e5293305541a2046ce17fc5a270668dd819bc1c3ed489ca4443b27ee8184fe2\"" Sep 12 17:36:57.720910 containerd[1989]: time="2025-09-12T17:36:57.720722226Z" level=info msg="CreateContainer within sandbox \"5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:36:57.724975 containerd[1989]: time="2025-09-12T17:36:57.724007686Z" level=info msg="CreateContainer within sandbox \"9e5293305541a2046ce17fc5a270668dd819bc1c3ed489ca4443b27ee8184fe2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:36:57.735505 containerd[1989]: time="2025-09-12T17:36:57.735460758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-87,Uid:651a85fbb6eb0c1fb39f0a4394f3ded5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf\"" Sep 12 17:36:57.738924 containerd[1989]: time="2025-09-12T17:36:57.738882085Z" level=info msg="CreateContainer within sandbox \"b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:36:57.762440 containerd[1989]: time="2025-09-12T17:36:57.762262693Z" level=info msg="CreateContainer within sandbox \"9e5293305541a2046ce17fc5a270668dd819bc1c3ed489ca4443b27ee8184fe2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fbaed0755b44f574fb9d046e6f3cbb7ce41557aadd23928525d0931ee006492d\"" Sep 12 17:36:57.763025 containerd[1989]: time="2025-09-12T17:36:57.762988874Z" level=info msg="StartContainer for \"fbaed0755b44f574fb9d046e6f3cbb7ce41557aadd23928525d0931ee006492d\"" Sep 12 17:36:57.768083 containerd[1989]: time="2025-09-12T17:36:57.768036019Z" level=info msg="CreateContainer within sandbox \"5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea\"" Sep 12 17:36:57.768870 containerd[1989]: time="2025-09-12T17:36:57.768846571Z" level=info msg="StartContainer for \"2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea\"" Sep 12 
17:36:57.769638 containerd[1989]: time="2025-09-12T17:36:57.769602298Z" level=info msg="CreateContainer within sandbox \"b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd\"" Sep 12 17:36:57.770829 containerd[1989]: time="2025-09-12T17:36:57.770289830Z" level=info msg="StartContainer for \"a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd\"" Sep 12 17:36:57.796076 kubelet[2822]: W0912 17:36:57.795991 2822 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-87&limit=500&resourceVersion=0": dial tcp 172.31.19.87:6443: connect: connection refused Sep 12 17:36:57.796250 kubelet[2822]: E0912 17:36:57.796092 2822 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-87&limit=500&resourceVersion=0\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:57.802196 kubelet[2822]: E0912 17:36:57.802146 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": dial tcp 172.31.19.87:6443: connect: connection refused" interval="1.6s" Sep 12 17:36:57.813148 systemd[1]: Started cri-containerd-fbaed0755b44f574fb9d046e6f3cbb7ce41557aadd23928525d0931ee006492d.scope - libcontainer container fbaed0755b44f574fb9d046e6f3cbb7ce41557aadd23928525d0931ee006492d. Sep 12 17:36:57.832439 systemd[1]: Started cri-containerd-2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea.scope - libcontainer container 2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea. Sep 12 17:36:57.852103 systemd[1]: Started cri-containerd-a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd.scope - libcontainer container a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd. 
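The flow running through these entries is the CRI call sequence the kubelet drives for each static pod: RunPodSandbox (whose PID 1 is the pause:3.8 image pulled just above), then CreateContainer inside the returned sandbox ID, then StartContainer. A hedged outline of that sequence with an in-memory stand-in for the runtime; the real kubelet goes through gRPC to containerd, and all request details are elided here:

    // Outline of the RunPodSandbox -> CreateContainer -> StartContainer
    // sequence above, against a stand-in for the CRI RuntimeService.
    package main

    import "fmt"

    type criRuntime interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, containerName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    func startStaticPod(rt criRuntime, pod, container string) error {
        sandbox, err := rt.RunPodSandbox(pod) // pause image becomes the sandbox's PID 1
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        id, err := rt.CreateContainer(sandbox, container)
        if err != nil {
            return fmt.Errorf("CreateContainer in %s: %w", sandbox, err)
        }
        return rt.StartContainer(id)
    }

    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        return name + "@" + sb, nil
    }
    func (f *fakeRuntime) StartContainer(id string) error {
        fmt.Println("StartContainer for", id, "returns successfully")
        return nil
    }

    func main() {
        rt := &fakeRuntime{}
        for _, p := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
            if err := startStaticPod(rt, p+"-ip-172-31-19-87", p); err != nil {
                fmt.Println(err)
            }
        }
    }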
Sep 12 17:36:57.920716 containerd[1989]: time="2025-09-12T17:36:57.920568604Z" level=info msg="StartContainer for \"2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea\" returns successfully" Sep 12 17:36:57.935410 containerd[1989]: time="2025-09-12T17:36:57.935248564Z" level=info msg="StartContainer for \"fbaed0755b44f574fb9d046e6f3cbb7ce41557aadd23928525d0931ee006492d\" returns successfully" Sep 12 17:36:57.967581 containerd[1989]: time="2025-09-12T17:36:57.967528743Z" level=info msg="StartContainer for \"a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd\" returns successfully" Sep 12 17:36:57.992021 kubelet[2822]: I0912 17:36:57.991988 2822 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:36:57.992416 kubelet[2822]: E0912 17:36:57.992381 2822 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.87:6443/api/v1/nodes\": dial tcp 172.31.19.87:6443: connect: connection refused" node="ip-172-31-19-87" Sep 12 17:36:58.442560 kubelet[2822]: E0912 17:36:58.442456 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:58.444203 kubelet[2822]: E0912 17:36:58.444038 2822 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:36:58.444996 kubelet[2822]: E0912 17:36:58.444408 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:58.448060 kubelet[2822]: E0912 17:36:58.448036 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:59.449521 kubelet[2822]: E0912 17:36:59.449486 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:59.451754 kubelet[2822]: E0912 17:36:59.451724 2822 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:36:59.596428 kubelet[2822]: I0912 17:36:59.596397 2822 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:37:01.007959 kubelet[2822]: E0912 17:37:01.007862 2822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-87\" not found" node="ip-172-31-19-87" Sep 12 17:37:01.140926 kubelet[2822]: I0912 17:37:01.139038 2822 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-87" Sep 12 17:37:01.140926 kubelet[2822]: E0912 17:37:01.139085 2822 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-87\": node \"ip-172-31-19-87\" not found" Sep 12 17:37:01.192620 kubelet[2822]: I0912 17:37:01.192482 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:01.204078 kubelet[2822]: E0912 
17:37:01.204047 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:01.206220 kubelet[2822]: I0912 17:37:01.206015 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:37:01.210919 kubelet[2822]: E0912 17:37:01.210610 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:37:01.210919 kubelet[2822]: I0912 17:37:01.210648 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:01.213772 kubelet[2822]: E0912 17:37:01.213611 2822 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:01.356353 kubelet[2822]: I0912 17:37:01.356308 2822 apiserver.go:52] "Watching apiserver" Sep 12 17:37:01.399716 kubelet[2822]: I0912 17:37:01.399323 2822 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:37:02.551784 kubelet[2822]: I0912 17:37:02.551750 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:02.869139 kubelet[2822]: I0912 17:37:02.868643 2822 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:04.711811 systemd[1]: Reloading requested from client PID 3099 ('systemctl') (unit session-9.scope)... Sep 12 17:37:04.711831 systemd[1]: Reloading... Sep 12 17:37:04.850843 zram_generator::config[3140]: No configuration found. Sep 12 17:37:05.033036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:05.181907 systemd[1]: Reloading finished in 469 ms. Sep 12 17:37:05.228773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:05.242645 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:37:05.242970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:05.243048 systemd[1]: kubelet.service: Consumed 1.098s CPU time, 132.9M memory peak, 0B memory swap peak. Sep 12 17:37:05.255467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:05.420406 update_engine[1962]: I20250912 17:37:05.419967 1962 update_attempter.cc:509] Updating boot flags... Sep 12 17:37:05.530851 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3208) Sep 12 17:37:05.653128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
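The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors are another expected bootstrap race: the static pods are already running from the on-disk manifests, and the kubelet is merely unable to publish their apiserver-visible mirror pods until the default PriorityClasses exist. A sketch of the annotations that mark a mirror pod, using the well-known kubelet config annotation keys; the hash value below is illustrative, copied from a pod UID seen earlier in this log:

    // Illustrative mirror-pod annotations; the kubelet stamps these when it
    // publishes an apiserver twin of a file-sourced static pod.
    package main

    import "fmt"

    func mirrorPodAnnotations(manifestHash string) map[string]string {
        return map[string]string{
            "kubernetes.io/config.source": "file",       // came from /etc/kubernetes/manifests
            "kubernetes.io/config.hash":   manifestHash, // hash of the on-disk manifest
            "kubernetes.io/config.mirror": manifestHash, // marks the object as a mirror pod
        }
    }

    func main() {
        for k, v := range mirrorPodAnnotations("a0ed9d94fdbc4e3a694672dbbfe47f97") {
            fmt.Printf("%s=%s\n", k, v)
        }
    }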
Sep 12 17:37:05.681973 (kubelet)[3280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:05.820846 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3208) Sep 12 17:37:05.889079 kubelet[3280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:05.889551 kubelet[3280]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:37:05.891615 kubelet[3280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:05.891615 kubelet[3280]: I0912 17:37:05.890733 3280 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:37:05.906414 kubelet[3280]: I0912 17:37:05.905887 3280 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:37:05.906414 kubelet[3280]: I0912 17:37:05.905928 3280 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:37:05.906414 kubelet[3280]: I0912 17:37:05.906332 3280 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:37:05.912546 kubelet[3280]: I0912 17:37:05.911070 3280 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:37:05.938865 kubelet[3280]: I0912 17:37:05.938729 3280 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:37:05.977597 kubelet[3280]: E0912 17:37:05.977534 3280 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:37:05.977597 kubelet[3280]: I0912 17:37:05.977596 3280 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:37:05.990998 kubelet[3280]: I0912 17:37:05.990962 3280 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:37:05.991765 kubelet[3280]: I0912 17:37:05.991300 3280 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:37:05.991765 kubelet[3280]: I0912 17:37:05.991347 3280 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:37:05.991765 kubelet[3280]: I0912 17:37:05.991615 3280 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:37:05.991765 kubelet[3280]: I0912 17:37:05.991632 3280 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:37:05.999137 kubelet[3280]: I0912 17:37:05.999086 3280 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:06.000575 kubelet[3280]: I0912 17:37:06.000547 3280 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:37:06.015242 kubelet[3280]: I0912 17:37:06.015148 3280 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:37:06.016093 kubelet[3280]: I0912 17:37:06.016069 3280 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:37:06.018584 kubelet[3280]: I0912 17:37:06.016400 3280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:37:06.053862 kubelet[3280]: I0912 17:37:06.041015 3280 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:37:06.056745 kubelet[3280]: I0912 17:37:06.056712 3280 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:37:06.070096 kubelet[3280]: I0912 17:37:06.070064 3280 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:37:06.070227 kubelet[3280]: I0912 17:37:06.070117 3280 server.go:1287] "Started kubelet" Sep 12 17:37:06.073610 kubelet[3280]: I0912 17:37:06.073196 3280 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:37:06.099027 kubelet[3280]: I0912 17:37:06.096423 3280 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:37:06.099027 kubelet[3280]: I0912 17:37:06.096985 3280 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:37:06.099027 kubelet[3280]: I0912 17:37:06.098180 3280 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:37:06.104399 kubelet[3280]: I0912 17:37:06.103159 3280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:37:06.114850 kubelet[3280]: I0912 17:37:06.108085 3280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:37:06.115708 kubelet[3280]: I0912 17:37:06.115686 3280 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:37:06.132742 kubelet[3280]: I0912 17:37:06.131893 3280 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:37:06.132742 kubelet[3280]: I0912 17:37:06.132011 3280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:37:06.136151 kubelet[3280]: I0912 17:37:06.118555 3280 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:37:06.136554 kubelet[3280]: I0912 17:37:06.136526 3280 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:37:06.155820 kubelet[3280]: I0912 17:37:06.153592 3280 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:37:06.192963 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3208) Sep 12 17:37:06.202301 kubelet[3280]: E0912 17:37:06.201827 3280 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:37:06.226206 kubelet[3280]: I0912 17:37:06.225413 3280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:37:06.229464 kubelet[3280]: I0912 17:37:06.229431 3280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:37:06.235722 kubelet[3280]: I0912 17:37:06.235691 3280 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:37:06.236038 kubelet[3280]: I0912 17:37:06.235936 3280 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:37:06.236038 kubelet[3280]: I0912 17:37:06.235954 3280 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:37:06.257213 kubelet[3280]: E0912 17:37:06.257159 3280 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:37:06.362098 kubelet[3280]: E0912 17:37:06.361878 3280 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:37:06.394885 kubelet[3280]: I0912 17:37:06.393583 3280 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:37:06.394885 kubelet[3280]: I0912 17:37:06.393604 3280 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:37:06.394885 kubelet[3280]: I0912 17:37:06.393626 3280 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395358 3280 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395381 3280 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395407 3280 policy_none.go:49] "None policy: Start"
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395424 3280 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395439 3280 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:37:06.395675 kubelet[3280]: I0912 17:37:06.395581 3280 state_mem.go:75] "Updated machine memory state"
Sep 12 17:37:06.406654 kubelet[3280]: I0912 17:37:06.406627 3280 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:37:06.409330 kubelet[3280]: I0912 17:37:06.408521 3280 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:37:06.409330 kubelet[3280]: I0912 17:37:06.408541 3280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:37:06.409330 kubelet[3280]: I0912 17:37:06.409133 3280 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:37:06.416828 kubelet[3280]: E0912 17:37:06.414843 3280 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
err="no imagefs label for configured runtime" Sep 12 17:37:06.551555 kubelet[3280]: I0912 17:37:06.549415 3280 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-87" Sep 12 17:37:06.565299 kubelet[3280]: I0912 17:37:06.565272 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:06.568642 kubelet[3280]: I0912 17:37:06.568617 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:06.571614 kubelet[3280]: I0912 17:37:06.569333 3280 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-87" Sep 12 17:37:06.571869 kubelet[3280]: I0912 17:37:06.571853 3280 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-87" Sep 12 17:37:06.572215 kubelet[3280]: I0912 17:37:06.569563 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:37:06.581299 kubelet[3280]: E0912 17:37:06.581269 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-87\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:06.590994 kubelet[3280]: E0912 17:37:06.590889 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-87\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:06.642502 kubelet[3280]: I0912 17:37:06.642447 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87" Sep 12 17:37:06.642502 kubelet[3280]: I0912 17:37:06.642499 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/651a85fbb6eb0c1fb39f0a4394f3ded5-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-87\" (UID: \"651a85fbb6eb0c1fb39f0a4394f3ded5\") " pod="kube-system/kube-scheduler-ip-172-31-19-87" Sep 12 17:37:06.642502 kubelet[3280]: I0912 17:37:06.642529 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-ca-certs\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:06.642772 kubelet[3280]: I0912 17:37:06.642552 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:06.642772 kubelet[3280]: I0912 17:37:06.642579 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0ed9d94fdbc4e3a694672dbbfe47f97-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-87\" (UID: \"a0ed9d94fdbc4e3a694672dbbfe47f97\") " pod="kube-system/kube-apiserver-ip-172-31-19-87" Sep 12 17:37:06.642772 kubelet[3280]: I0912 17:37:06.642602 3280 
Sep 12 17:37:06.642772 kubelet[3280]: I0912 17:37:06.642623 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87"
Sep 12 17:37:06.642772 kubelet[3280]: I0912 17:37:06.642658 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87"
Sep 12 17:37:06.643066 kubelet[3280]: I0912 17:37:06.642683 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d843b5c07bd8595a16214a0aae1c8dc5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-87\" (UID: \"d843b5c07bd8595a16214a0aae1c8dc5\") " pod="kube-system/kube-controller-manager-ip-172-31-19-87"
Sep 12 17:37:07.021921 kubelet[3280]: I0912 17:37:07.021877 3280 apiserver.go:52] "Watching apiserver"
Sep 12 17:37:07.036389 kubelet[3280]: I0912 17:37:07.036348 3280 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:37:07.218065 kubelet[3280]: I0912 17:37:07.218001 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-87" podStartSLOduration=5.21798384 podStartE2EDuration="5.21798384s" podCreationTimestamp="2025-09-12 17:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:07.210005724 +0000 UTC m=+1.505799348" watchObservedRunningTime="2025-09-12 17:37:07.21798384 +0000 UTC m=+1.513777446"
Sep 12 17:37:07.229334 kubelet[3280]: I0912 17:37:07.229048 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-87" podStartSLOduration=5.229029663 podStartE2EDuration="5.229029663s" podCreationTimestamp="2025-09-12 17:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:07.219659909 +0000 UTC m=+1.515453534" watchObservedRunningTime="2025-09-12 17:37:07.229029663 +0000 UTC m=+1.524823279"
Sep 12 17:37:07.304902 kubelet[3280]: I0912 17:37:07.304182 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-87"
Sep 12 17:37:07.310494 kubelet[3280]: E0912 17:37:07.310445 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-87\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-87"
Sep 12 17:37:07.313422 kubelet[3280]: I0912 17:37:07.313287 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-87" podStartSLOduration=1.313271251 podStartE2EDuration="1.313271251s" podCreationTimestamp="2025-09-12 17:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:07.229234091 +0000 UTC m=+1.525027696" watchObservedRunningTime="2025-09-12 17:37:07.313271251 +0000 UTC m=+1.609064852"
pod="kube-system/kube-controller-manager-ip-172-31-19-87" podStartSLOduration=1.313271251 podStartE2EDuration="1.313271251s" podCreationTimestamp="2025-09-12 17:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:07.229234091 +0000 UTC m=+1.525027696" watchObservedRunningTime="2025-09-12 17:37:07.313271251 +0000 UTC m=+1.609064852" Sep 12 17:37:09.481930 kubelet[3280]: I0912 17:37:09.481411 3280 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:37:09.482545 containerd[1989]: time="2025-09-12T17:37:09.481821234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:37:09.484293 kubelet[3280]: I0912 17:37:09.483938 3280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:37:10.229507 systemd[1]: Created slice kubepods-besteffort-pod20d7098c_6054_4e30_b6b3_239ffbb98190.slice - libcontainer container kubepods-besteffort-pod20d7098c_6054_4e30_b6b3_239ffbb98190.slice. Sep 12 17:37:10.264825 kubelet[3280]: I0912 17:37:10.264771 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20d7098c-6054-4e30-b6b3-239ffbb98190-lib-modules\") pod \"kube-proxy-c8ch4\" (UID: \"20d7098c-6054-4e30-b6b3-239ffbb98190\") " pod="kube-system/kube-proxy-c8ch4" Sep 12 17:37:10.264825 kubelet[3280]: I0912 17:37:10.264823 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgtn\" (UniqueName: \"kubernetes.io/projected/20d7098c-6054-4e30-b6b3-239ffbb98190-kube-api-access-mhgtn\") pod \"kube-proxy-c8ch4\" (UID: \"20d7098c-6054-4e30-b6b3-239ffbb98190\") " pod="kube-system/kube-proxy-c8ch4" Sep 12 17:37:10.265003 kubelet[3280]: I0912 17:37:10.264846 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20d7098c-6054-4e30-b6b3-239ffbb98190-kube-proxy\") pod \"kube-proxy-c8ch4\" (UID: \"20d7098c-6054-4e30-b6b3-239ffbb98190\") " pod="kube-system/kube-proxy-c8ch4" Sep 12 17:37:10.265003 kubelet[3280]: I0912 17:37:10.264864 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20d7098c-6054-4e30-b6b3-239ffbb98190-xtables-lock\") pod \"kube-proxy-c8ch4\" (UID: \"20d7098c-6054-4e30-b6b3-239ffbb98190\") " pod="kube-system/kube-proxy-c8ch4" Sep 12 17:37:10.372524 kubelet[3280]: E0912 17:37:10.372484 3280 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:37:10.372524 kubelet[3280]: E0912 17:37:10.372524 3280 projected.go:194] Error preparing data for projected volume kube-api-access-mhgtn for pod kube-system/kube-proxy-c8ch4: configmap "kube-root-ca.crt" not found Sep 12 17:37:10.372708 kubelet[3280]: E0912 17:37:10.372598 3280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20d7098c-6054-4e30-b6b3-239ffbb98190-kube-api-access-mhgtn podName:20d7098c-6054-4e30-b6b3-239ffbb98190 nodeName:}" failed. No retries permitted until 2025-09-12 17:37:10.872578293 +0000 UTC m=+5.168371898 (durationBeforeRetry 500ms). 
Sep 12 17:37:10.652027 systemd[1]: Created slice kubepods-besteffort-pod424bca35_337e_4484_89f8_485a85547a8b.slice - libcontainer container kubepods-besteffort-pod424bca35_337e_4484_89f8_485a85547a8b.slice.
Sep 12 17:37:10.667076 kubelet[3280]: I0912 17:37:10.666920 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/424bca35-337e-4484-89f8-485a85547a8b-var-lib-calico\") pod \"tigera-operator-755d956888-zc86w\" (UID: \"424bca35-337e-4484-89f8-485a85547a8b\") " pod="tigera-operator/tigera-operator-755d956888-zc86w"
Sep 12 17:37:10.667076 kubelet[3280]: I0912 17:37:10.666974 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldsz\" (UniqueName: \"kubernetes.io/projected/424bca35-337e-4484-89f8-485a85547a8b-kube-api-access-vldsz\") pod \"tigera-operator-755d956888-zc86w\" (UID: \"424bca35-337e-4484-89f8-485a85547a8b\") " pod="tigera-operator/tigera-operator-755d956888-zc86w"
Sep 12 17:37:10.960494 containerd[1989]: time="2025-09-12T17:37:10.960383558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zc86w,Uid:424bca35-337e-4484-89f8-485a85547a8b,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:37:11.000622 containerd[1989]: time="2025-09-12T17:37:11.000501100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:11.001492 containerd[1989]: time="2025-09-12T17:37:11.001330136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:11.001492 containerd[1989]: time="2025-09-12T17:37:11.001358332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:11.001984 containerd[1989]: time="2025-09-12T17:37:11.001901662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:11.029999 systemd[1]: Started cri-containerd-c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951.scope - libcontainer container c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951.
Sep 12 17:37:11.079959 containerd[1989]: time="2025-09-12T17:37:11.079805850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zc86w,Uid:424bca35-337e-4484-89f8-485a85547a8b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951\""
Sep 12 17:37:11.084633 containerd[1989]: time="2025-09-12T17:37:11.084486367Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:37:11.140745 containerd[1989]: time="2025-09-12T17:37:11.140691195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8ch4,Uid:20d7098c-6054-4e30-b6b3-239ffbb98190,Namespace:kube-system,Attempt:0,}"
Sep 12 17:37:11.176813 containerd[1989]: time="2025-09-12T17:37:11.176556679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
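
The nestedpendingoperations message above shows the kubelet's volume retry policy in action: the failed mount is requeued with durationBeforeRetry 500ms. A generic sketch of that policy; the 500ms base is from the log, while the doubling factor and the roughly two-minute cap are assumptions matching common kubelet defaults, not values read from this host:

    // Generic sketch of the retry policy behind "durationBeforeRetry 500ms"
    // above: a per-operation delay that doubles on each consecutive failure.
    package main

    import (
    	"fmt"
    	"time"
    )

    type backoff struct {
    	delay time.Duration
    }

    func (b *backoff) next() time.Duration {
    	const (
    		base     = 500 * time.Millisecond        // from the log
    		maxDelay = 2*time.Minute + 2*time.Second // assumed cap
    	)
    	if b.delay == 0 {
    		b.delay = base
    	} else {
    		b.delay *= 2
    		if b.delay > maxDelay {
    			b.delay = maxDelay
    		}
    	}
    	return b.delay
    }

    func main() {
    	var b backoff
    	for i := 0; i < 10; i++ {
    		fmt.Println(b.next()) // 500ms, 1s, 2s, 4s, ... capped
    	}
    }
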
Sep 12 17:37:11.176813 containerd[1989]: time="2025-09-12T17:37:11.176614096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:11.176813 containerd[1989]: time="2025-09-12T17:37:11.176630484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:11.176813 containerd[1989]: time="2025-09-12T17:37:11.176738028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:11.200041 systemd[1]: Started cri-containerd-a26200ba35f8c847456cc4e2d593a4bac564193f0752cef6ee18bc2d030569ba.scope - libcontainer container a26200ba35f8c847456cc4e2d593a4bac564193f0752cef6ee18bc2d030569ba.
Sep 12 17:37:11.228300 containerd[1989]: time="2025-09-12T17:37:11.228058517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8ch4,Uid:20d7098c-6054-4e30-b6b3-239ffbb98190,Namespace:kube-system,Attempt:0,} returns sandbox id \"a26200ba35f8c847456cc4e2d593a4bac564193f0752cef6ee18bc2d030569ba\""
Sep 12 17:37:11.233664 containerd[1989]: time="2025-09-12T17:37:11.233608323Z" level=info msg="CreateContainer within sandbox \"a26200ba35f8c847456cc4e2d593a4bac564193f0752cef6ee18bc2d030569ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:37:11.268244 containerd[1989]: time="2025-09-12T17:37:11.268179154Z" level=info msg="CreateContainer within sandbox \"a26200ba35f8c847456cc4e2d593a4bac564193f0752cef6ee18bc2d030569ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb9240ad55376ee1e2b2f6b4a978262a124e5ceea888dde6a016340e0cef25cb\""
Sep 12 17:37:11.268976 containerd[1989]: time="2025-09-12T17:37:11.268927416Z" level=info msg="StartContainer for \"cb9240ad55376ee1e2b2f6b4a978262a124e5ceea888dde6a016340e0cef25cb\""
Sep 12 17:37:11.300018 systemd[1]: Started cri-containerd-cb9240ad55376ee1e2b2f6b4a978262a124e5ceea888dde6a016340e0cef25cb.scope - libcontainer container cb9240ad55376ee1e2b2f6b4a978262a124e5ceea888dde6a016340e0cef25cb.
Sep 12 17:37:11.338815 containerd[1989]: time="2025-09-12T17:37:11.338727496Z" level=info msg="StartContainer for \"cb9240ad55376ee1e2b2f6b4a978262a124e5ceea888dde6a016340e0cef25cb\" returns successfully"
Sep 12 17:37:12.687861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696028164.mount: Deactivated successfully.
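
The lines above are containerd's side of the CRI sequence for kube-proxy: RunPodSandbox, then CreateContainer, then StartContainer. A pared-down client sketch of the same three calls; a real kubelet supplies far more sandbox and container configuration, the socket path is assumed, and the image tag is illustrative:

    // Sketch of the CRI call sequence visible above. Types come from the
    // published k8s.io/cri-api package; metadata values are taken from the
    // log, the image tag is illustrative.
    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name: "kube-proxy-c8ch4", Uid: "20d7098c-6054-4e30-b6b3-239ffbb98190",
    			Namespace: "kube-system", Attempt: 0,
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		panic(err)
    	}
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.4"}, // illustrative
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
    		panic(err)
    	}
    }
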
Sep 12 17:37:13.445430 containerd[1989]: time="2025-09-12T17:37:13.445373004Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:13.446978 containerd[1989]: time="2025-09-12T17:37:13.446764128Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 17:37:13.449415 containerd[1989]: time="2025-09-12T17:37:13.448218101Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:13.451718 containerd[1989]: time="2025-09-12T17:37:13.450724486Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:13.451718 containerd[1989]: time="2025-09-12T17:37:13.451565699Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.367036411s"
Sep 12 17:37:13.451718 containerd[1989]: time="2025-09-12T17:37:13.451605093Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 17:37:13.454668 containerd[1989]: time="2025-09-12T17:37:13.454635866Z" level=info msg="CreateContainer within sandbox \"c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:37:13.473414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171504591.mount: Deactivated successfully.
Sep 12 17:37:13.475755 containerd[1989]: time="2025-09-12T17:37:13.475709525Z" level=info msg="CreateContainer within sandbox \"c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9\""
Sep 12 17:37:13.476621 containerd[1989]: time="2025-09-12T17:37:13.476590466Z" level=info msg="StartContainer for \"323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9\""
Sep 12 17:37:13.518048 systemd[1]: Started cri-containerd-323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9.scope - libcontainer container 323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9.
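
For scale, the pull record above is enough to estimate throughput: 25,058,604 bytes in 2.367036411s is roughly 10.6 MB/s (about 10.1 MiB/s). The interval also includes registry round-trips, so the raw transfer rate was somewhat higher.
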
Sep 12 17:37:13.566818 containerd[1989]: time="2025-09-12T17:37:13.566736972Z" level=info msg="StartContainer for \"323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9\" returns successfully"
Sep 12 17:37:14.354348 kubelet[3280]: I0912 17:37:14.354259 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8ch4" podStartSLOduration=4.351999553 podStartE2EDuration="4.351999553s" podCreationTimestamp="2025-09-12 17:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:12.331137374 +0000 UTC m=+6.626930998" watchObservedRunningTime="2025-09-12 17:37:14.351999553 +0000 UTC m=+8.647793199"
Sep 12 17:37:16.285483 kubelet[3280]: I0912 17:37:16.285412 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-zc86w" podStartSLOduration=3.914893692 podStartE2EDuration="6.2853742s" podCreationTimestamp="2025-09-12 17:37:10 +0000 UTC" firstStartedPulling="2025-09-12 17:37:11.082328539 +0000 UTC m=+5.378122143" lastFinishedPulling="2025-09-12 17:37:13.452809044 +0000 UTC m=+7.748602651" observedRunningTime="2025-09-12 17:37:14.354577134 +0000 UTC m=+8.650370743" watchObservedRunningTime="2025-09-12 17:37:16.2853742 +0000 UTC m=+10.581167824"
Sep 12 17:37:21.027685 sudo[2334]: pam_unix(sudo:session): session closed for user root
Sep 12 17:37:21.054358 sshd[2331]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:21.058768 systemd[1]: sshd@8-172.31.19.87:22-147.75.109.163:51504.service: Deactivated successfully.
Sep 12 17:37:21.062525 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:37:21.062750 systemd[1]: session-9.scope: Consumed 4.485s CPU time, 141.8M memory peak, 0B memory swap peak.
Sep 12 17:37:21.065557 systemd-logind[1960]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:37:21.069009 systemd-logind[1960]: Removed session 9.
Sep 12 17:37:25.445438 systemd[1]: Created slice kubepods-besteffort-pod10edc681_c14c_4afa_8b9d_c3bc9bbbbd5e.slice - libcontainer container kubepods-besteffort-pod10edc681_c14c_4afa_8b9d_c3bc9bbbbd5e.slice.
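
The "Created slice" names above encode the pod QoS class and UID: with the systemd cgroup driver named in the NodeConfig earlier, a best-effort pod lands in kubepods-besteffort-pod<uid>.slice, the UID's dashes escaped to underscores because "-" is systemd's slice hierarchy separator. A sketch of that mapping:

    // Sketch: derive the systemd slice name used for a best-effort pod, as
    // seen in the "Created slice" lines above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func besteffortPodSlice(podUID string) string {
    	escaped := strings.ReplaceAll(podUID, "-", "_") // "-" is reserved in slice names
    	return fmt.Sprintf("kubepods-besteffort-pod%s.slice", escaped)
    }

    func main() {
    	// UID taken from the kube-proxy pod logged earlier.
    	fmt.Println(besteffortPodSlice("20d7098c-6054-4e30-b6b3-239ffbb98190"))
    	// Output: kubepods-besteffort-pod20d7098c_6054_4e30_b6b3_239ffbb98190.slice
    }
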
Sep 12 17:37:25.498820 kubelet[3280]: I0912 17:37:25.498474 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e-tigera-ca-bundle\") pod \"calico-typha-5bb754bcd7-t7q6m\" (UID: \"10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e\") " pod="calico-system/calico-typha-5bb754bcd7-t7q6m"
Sep 12 17:37:25.503437 kubelet[3280]: I0912 17:37:25.501843 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e-typha-certs\") pod \"calico-typha-5bb754bcd7-t7q6m\" (UID: \"10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e\") " pod="calico-system/calico-typha-5bb754bcd7-t7q6m"
Sep 12 17:37:25.503437 kubelet[3280]: I0912 17:37:25.501903 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f7sp\" (UniqueName: \"kubernetes.io/projected/10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e-kube-api-access-5f7sp\") pod \"calico-typha-5bb754bcd7-t7q6m\" (UID: \"10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e\") " pod="calico-system/calico-typha-5bb754bcd7-t7q6m"
Sep 12 17:37:25.754921 containerd[1989]: time="2025-09-12T17:37:25.754453752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb754bcd7-t7q6m,Uid:10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e,Namespace:calico-system,Attempt:0,}"
Sep 12 17:37:25.819233 containerd[1989]: time="2025-09-12T17:37:25.818897707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:25.819233 containerd[1989]: time="2025-09-12T17:37:25.818978181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:25.819233 containerd[1989]: time="2025-09-12T17:37:25.819015855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:25.819477 containerd[1989]: time="2025-09-12T17:37:25.819153092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:25.909193 kubelet[3280]: I0912 17:37:25.909153 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-policysync\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909335 kubelet[3280]: I0912 17:37:25.909201 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-lib-modules\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909335 kubelet[3280]: I0912 17:37:25.909224 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-tigera-ca-bundle\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909335 kubelet[3280]: I0912 17:37:25.909250 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-cni-net-dir\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909335 kubelet[3280]: I0912 17:37:25.909271 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t57xj\" (UniqueName: \"kubernetes.io/projected/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-kube-api-access-t57xj\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909335 kubelet[3280]: I0912 17:37:25.909300 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-xtables-lock\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909552 kubelet[3280]: I0912 17:37:25.909321 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-var-run-calico\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909552 kubelet[3280]: I0912 17:37:25.909349 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-flexvol-driver-host\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909552 kubelet[3280]: I0912 17:37:25.909374 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-var-lib-calico\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909552 kubelet[3280]: I0912 17:37:25.909399 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-node-certs\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909552 kubelet[3280]: I0912 17:37:25.909427 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-cni-bin-dir\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.909757 kubelet[3280]: I0912 17:37:25.909452 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e-cni-log-dir\") pod \"calico-node-mgkrd\" (UID: \"ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e\") " pod="calico-system/calico-node-mgkrd"
Sep 12 17:37:25.928814 systemd[1]: Started cri-containerd-31d7eb818c3fab70ad5880082b3e54878d8cffa2fbcdedc09df13ea3c007aac2.scope - libcontainer container 31d7eb818c3fab70ad5880082b3e54878d8cffa2fbcdedc09df13ea3c007aac2.
Sep 12 17:37:25.929925 systemd[1]: Created slice kubepods-besteffort-podae8e1b6f_3400_4d6d_b9d9_8408a4bb454e.slice - libcontainer container kubepods-besteffort-podae8e1b6f_3400_4d6d_b9d9_8408a4bb454e.slice.
Sep 12 17:37:26.013209 kubelet[3280]: E0912 17:37:26.013103 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:37:26.014235 kubelet[3280]: W0912 17:37:26.013135 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:37:26.015755 kubelet[3280]: E0912 17:37:26.015614 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
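
The probe failures above occur because a FlexVolume driver directory (nodeagent~uds) exists but its executable does not: the kubelet runs the driver binary with "init" as its first argument and parses stdout as JSON, so empty output is exactly "unexpected end of JSON input". A minimal driver sketch satisfying that call, following the documented FlexVolume convention:

    // Minimal sketch of a FlexVolume driver answering the "init" call that
    // is failing above. The kubelet invokes the executable with "init" as
    // argv[1] and parses stdout as JSON; empty output yields "unexpected
    // end of JSON input". Capabilities follow the documented convention.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false}, // no attach/detach support
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Any verb this sketch does not implement.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    }
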
Sep 12 17:37:26.084164 containerd[1989]: time="2025-09-12T17:37:26.084077580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb754bcd7-t7q6m,Uid:10edc681-c14c-4afa-8b9d-c3bc9bbbbd5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"31d7eb818c3fab70ad5880082b3e54878d8cffa2fbcdedc09df13ea3c007aac2\""
Sep 12 17:37:26.088655 containerd[1989]: time="2025-09-12T17:37:26.088614634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:37:26.135817 kubelet[3280]: E0912 17:37:26.133022 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd"
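
The "network is not ready" error above is the kubelet relaying the runtime's NetworkReady condition, which containerd reports as false until a CNI config appears (at 17:37:09 it said it was still waiting for one to be dropped in place). A sketch that reads the same conditions over CRI, again assuming containerd's default socket:

    // Sketch: read the runtime's readiness conditions over CRI. The
    // "NetworkReady=false ... cni plugin not initialized" error above is
    // this condition as reported by containerd.
    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range resp.Status.Conditions {
    		// Expect RuntimeReady and NetworkReady entries here.
    		fmt.Printf("%s=%v reason=%q\n", cond.Type, cond.Status, cond.Reason)
    	}
    }
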
Sep 12 17:37:26.214854 kubelet[3280]: I0912 17:37:26.214720 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47dd082d-8313-4b06-a25a-46c1ffeb1afd-kubelet-dir\") pod \"csi-node-driver-hnq78\" (UID: \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\") " pod="calico-system/csi-node-driver-hnq78"
Sep 12 17:37:26.215284 kubelet[3280]: I0912 17:37:26.215204 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47dd082d-8313-4b06-a25a-46c1ffeb1afd-registration-dir\") pod \"csi-node-driver-hnq78\" (UID: \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\") " pod="calico-system/csi-node-driver-hnq78"
Sep 12 17:37:26.216756 kubelet[3280]: I0912 17:37:26.216384 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47dd082d-8313-4b06-a25a-46c1ffeb1afd-socket-dir\") pod \"csi-node-driver-hnq78\" (UID: \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\") " pod="calico-system/csi-node-driver-hnq78"
Sep 12 17:37:26.231266 kubelet[3280]: I0912 17:37:26.231232 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/47dd082d-8313-4b06-a25a-46c1ffeb1afd-varrun\") pod \"csi-node-driver-hnq78\" (UID: \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\") " pod="calico-system/csi-node-driver-hnq78"
Sep 12 17:37:26.232636 kubelet[3280]: I0912 17:37:26.232137 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqkmw\" (UniqueName: \"kubernetes.io/projected/47dd082d-8313-4b06-a25a-46c1ffeb1afd-kube-api-access-jqkmw\") pod \"csi-node-driver-hnq78\" (UID: \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\") " pod="calico-system/csi-node-driver-hnq78"
Error: unexpected end of JSON input" Sep 12 17:37:26.236591 kubelet[3280]: E0912 17:37:26.236573 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.236591 kubelet[3280]: W0912 17:37:26.236591 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.237111 kubelet[3280]: E0912 17:37:26.236606 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.244177 containerd[1989]: time="2025-09-12T17:37:26.243813972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mgkrd,Uid:ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:26.303724 containerd[1989]: time="2025-09-12T17:37:26.302651699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:26.303724 containerd[1989]: time="2025-09-12T17:37:26.302988672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:26.303724 containerd[1989]: time="2025-09-12T17:37:26.303021611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:26.303724 containerd[1989]: time="2025-09-12T17:37:26.303136472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:26.344303 kubelet[3280]: E0912 17:37:26.343876 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.344303 kubelet[3280]: W0912 17:37:26.343908 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.344303 kubelet[3280]: E0912 17:37:26.343993 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.345165 kubelet[3280]: E0912 17:37:26.344705 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.345165 kubelet[3280]: W0912 17:37:26.344726 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.345165 kubelet[3280]: E0912 17:37:26.345079 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:26.350649 kubelet[3280]: E0912 17:37:26.347510 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.350649 kubelet[3280]: W0912 17:37:26.347528 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.350649 kubelet[3280]: E0912 17:37:26.348135 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.350649 kubelet[3280]: E0912 17:37:26.348886 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.350649 kubelet[3280]: W0912 17:37:26.348901 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.350649 kubelet[3280]: E0912 17:37:26.348925 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.350938 kubelet[3280]: E0912 17:37:26.350878 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.350938 kubelet[3280]: W0912 17:37:26.350893 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.355493 kubelet[3280]: E0912 17:37:26.351164 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.355493 kubelet[3280]: E0912 17:37:26.352495 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.355493 kubelet[3280]: W0912 17:37:26.352510 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.355493 kubelet[3280]: E0912 17:37:26.352971 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.355493 kubelet[3280]: E0912 17:37:26.354396 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.355493 kubelet[3280]: W0912 17:37:26.354410 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.355493 kubelet[3280]: E0912 17:37:26.355351 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:26.358969 kubelet[3280]: E0912 17:37:26.356252 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.358969 kubelet[3280]: W0912 17:37:26.356273 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.358969 kubelet[3280]: E0912 17:37:26.356824 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.359208 kubelet[3280]: E0912 17:37:26.359189 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.359270 kubelet[3280]: W0912 17:37:26.359210 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.359563 kubelet[3280]: E0912 17:37:26.359543 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.361782 kubelet[3280]: E0912 17:37:26.360255 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.361782 kubelet[3280]: W0912 17:37:26.360274 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.361782 kubelet[3280]: E0912 17:37:26.361626 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.361978 kubelet[3280]: W0912 17:37:26.361695 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.372002 kubelet[3280]: E0912 17:37:26.369271 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.372002 kubelet[3280]: E0912 17:37:26.370946 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.372002 kubelet[3280]: W0912 17:37:26.370963 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.372002 kubelet[3280]: E0912 17:37:26.371464 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.375201 systemd[1]: Started cri-containerd-d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578.scope - libcontainer container d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578. 
Sep 12 17:37:26.376197 kubelet[3280]: E0912 17:37:26.375923 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.376197 kubelet[3280]: W0912 17:37:26.375944 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.381016 kubelet[3280]: E0912 17:37:26.380122 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.381016 kubelet[3280]: W0912 17:37:26.380573 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.382327 kubelet[3280]: E0912 17:37:26.382177 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.382451 kubelet[3280]: E0912 17:37:26.382367 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.384896 kubelet[3280]: E0912 17:37:26.382512 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.387349 kubelet[3280]: E0912 17:37:26.387225 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.387349 kubelet[3280]: W0912 17:37:26.387251 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.389673 kubelet[3280]: E0912 17:37:26.388826 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.392030 kubelet[3280]: E0912 17:37:26.391956 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.392030 kubelet[3280]: W0912 17:37:26.391981 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.393573 kubelet[3280]: E0912 17:37:26.393540 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:26.394629 kubelet[3280]: E0912 17:37:26.394141 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.394743 kubelet[3280]: W0912 17:37:26.394630 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.395900 kubelet[3280]: E0912 17:37:26.395869 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.396763 kubelet[3280]: E0912 17:37:26.396373 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.396763 kubelet[3280]: W0912 17:37:26.396389 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.396763 kubelet[3280]: E0912 17:37:26.396492 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.396763 kubelet[3280]: E0912 17:37:26.396748 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.396763 kubelet[3280]: W0912 17:37:26.396760 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.397041 kubelet[3280]: E0912 17:37:26.396826 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.397325 kubelet[3280]: E0912 17:37:26.397305 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.397325 kubelet[3280]: W0912 17:37:26.397324 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.397428 kubelet[3280]: E0912 17:37:26.397360 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.398355 kubelet[3280]: E0912 17:37:26.398334 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.399377 kubelet[3280]: W0912 17:37:26.398376 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.399450 kubelet[3280]: E0912 17:37:26.399400 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:26.399810 kubelet[3280]: E0912 17:37:26.399772 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.399874 kubelet[3280]: W0912 17:37:26.399817 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.399980 kubelet[3280]: E0912 17:37:26.399847 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.400810 kubelet[3280]: E0912 17:37:26.400210 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.400810 kubelet[3280]: W0912 17:37:26.400224 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.400810 kubelet[3280]: E0912 17:37:26.400241 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.401078 kubelet[3280]: E0912 17:37:26.401063 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.401134 kubelet[3280]: W0912 17:37:26.401079 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.401565 kubelet[3280]: E0912 17:37:26.401542 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.402152 kubelet[3280]: E0912 17:37:26.402132 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.402238 kubelet[3280]: W0912 17:37:26.402153 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.402238 kubelet[3280]: E0912 17:37:26.402169 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:26.443808 kubelet[3280]: E0912 17:37:26.443765 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:26.446635 kubelet[3280]: W0912 17:37:26.446592 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:26.446962 kubelet[3280]: E0912 17:37:26.446924 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:26.478931 containerd[1989]: time="2025-09-12T17:37:26.478863676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mgkrd,Uid:ae8e1b6f-3400-4d6d-b9d9-8408a4bb454e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\"" Sep 12 17:37:27.477583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466110078.mount: Deactivated successfully. Sep 12 17:37:28.238450 kubelet[3280]: E0912 17:37:28.238399 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:28.879600 containerd[1989]: time="2025-09-12T17:37:28.879548202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:28.880978 containerd[1989]: time="2025-09-12T17:37:28.880804596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:37:28.883620 containerd[1989]: time="2025-09-12T17:37:28.882247036Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:28.885787 containerd[1989]: time="2025-09-12T17:37:28.884873200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:28.885787 containerd[1989]: time="2025-09-12T17:37:28.885557457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.796892445s" Sep 12 17:37:28.885787 containerd[1989]: time="2025-09-12T17:37:28.885593998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:37:28.887498 containerd[1989]: time="2025-09-12T17:37:28.887273769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:37:28.910452 containerd[1989]: time="2025-09-12T17:37:28.910404381Z" level=info msg="CreateContainer within sandbox \"31d7eb818c3fab70ad5880082b3e54878d8cffa2fbcdedc09df13ea3c007aac2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:37:28.931968 containerd[1989]: time="2025-09-12T17:37:28.931855823Z" level=info msg="CreateContainer within sandbox \"31d7eb818c3fab70ad5880082b3e54878d8cffa2fbcdedc09df13ea3c007aac2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c6b46ca87874507b1af0a382f172be3d19503a4c36e48577211e2ad4a468425\"" Sep 12 17:37:28.932974 containerd[1989]: time="2025-09-12T17:37:28.932770024Z" level=info msg="StartContainer for \"2c6b46ca87874507b1af0a382f172be3d19503a4c36e48577211e2ad4a468425\"" Sep 12 17:37:28.978063 systemd[1]: Started 
cri-containerd-2c6b46ca87874507b1af0a382f172be3d19503a4c36e48577211e2ad4a468425.scope - libcontainer container 2c6b46ca87874507b1af0a382f172be3d19503a4c36e48577211e2ad4a468425. Sep 12 17:37:29.053190 containerd[1989]: time="2025-09-12T17:37:29.053116740Z" level=info msg="StartContainer for \"2c6b46ca87874507b1af0a382f172be3d19503a4c36e48577211e2ad4a468425\" returns successfully" Sep 12 17:37:29.453133 kubelet[3280]: I0912 17:37:29.453084 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb754bcd7-t7q6m" podStartSLOduration=1.654312236 podStartE2EDuration="4.453068371s" podCreationTimestamp="2025-09-12 17:37:25 +0000 UTC" firstStartedPulling="2025-09-12 17:37:26.088130436 +0000 UTC m=+20.383924054" lastFinishedPulling="2025-09-12 17:37:28.886886571 +0000 UTC m=+23.182680189" observedRunningTime="2025-09-12 17:37:29.452173929 +0000 UTC m=+23.747967553" watchObservedRunningTime="2025-09-12 17:37:29.453068371 +0000 UTC m=+23.748861994" Sep 12 17:37:29.526098 kubelet[3280]: E0912 17:37:29.526056 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.526098 kubelet[3280]: W0912 17:37:29.526083 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.526258 kubelet[3280]: E0912 17:37:29.526109 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.526430 kubelet[3280]: E0912 17:37:29.526402 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.526430 kubelet[3280]: W0912 17:37:29.526424 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.526567 kubelet[3280]: E0912 17:37:29.526445 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.526671 kubelet[3280]: E0912 17:37:29.526656 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.526671 kubelet[3280]: W0912 17:37:29.526668 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.526755 kubelet[3280]: E0912 17:37:29.526681 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.526944 kubelet[3280]: E0912 17:37:29.526925 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.526944 kubelet[3280]: W0912 17:37:29.526941 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527064 kubelet[3280]: E0912 17:37:29.526953 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.527161 kubelet[3280]: E0912 17:37:29.527153 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.527161 kubelet[3280]: W0912 17:37:29.527160 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527337 kubelet[3280]: E0912 17:37:29.527168 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.527337 kubelet[3280]: E0912 17:37:29.527335 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.527411 kubelet[3280]: W0912 17:37:29.527341 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527411 kubelet[3280]: E0912 17:37:29.527350 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.527516 kubelet[3280]: E0912 17:37:29.527500 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.527516 kubelet[3280]: W0912 17:37:29.527511 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527589 kubelet[3280]: E0912 17:37:29.527519 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.527686 kubelet[3280]: E0912 17:37:29.527667 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.527686 kubelet[3280]: W0912 17:37:29.527677 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527686 kubelet[3280]: E0912 17:37:29.527685 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.527913 kubelet[3280]: E0912 17:37:29.527891 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.527913 kubelet[3280]: W0912 17:37:29.527898 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.527913 kubelet[3280]: E0912 17:37:29.527906 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.528157 kubelet[3280]: E0912 17:37:29.528135 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.528157 kubelet[3280]: W0912 17:37:29.528157 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.528343 kubelet[3280]: E0912 17:37:29.528175 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.528447 kubelet[3280]: E0912 17:37:29.528427 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.528447 kubelet[3280]: W0912 17:37:29.528443 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.528572 kubelet[3280]: E0912 17:37:29.528457 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.528697 kubelet[3280]: E0912 17:37:29.528681 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.528697 kubelet[3280]: W0912 17:37:29.528694 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.528948 kubelet[3280]: E0912 17:37:29.528707 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.529015 kubelet[3280]: E0912 17:37:29.528982 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.529015 kubelet[3280]: W0912 17:37:29.528993 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.529015 kubelet[3280]: E0912 17:37:29.529006 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.529246 kubelet[3280]: E0912 17:37:29.529234 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.529246 kubelet[3280]: W0912 17:37:29.529243 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.529479 kubelet[3280]: E0912 17:37:29.529257 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.529479 kubelet[3280]: E0912 17:37:29.529477 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.529571 kubelet[3280]: W0912 17:37:29.529488 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.529571 kubelet[3280]: E0912 17:37:29.529501 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.610272 kubelet[3280]: E0912 17:37:29.610237 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.610272 kubelet[3280]: W0912 17:37:29.610261 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.610272 kubelet[3280]: E0912 17:37:29.610283 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.610681 kubelet[3280]: E0912 17:37:29.610552 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.610681 kubelet[3280]: W0912 17:37:29.610564 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.610681 kubelet[3280]: E0912 17:37:29.610575 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.611007 kubelet[3280]: E0912 17:37:29.610851 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.611007 kubelet[3280]: W0912 17:37:29.610866 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.611007 kubelet[3280]: E0912 17:37:29.610889 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.611370 kubelet[3280]: E0912 17:37:29.611351 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.611370 kubelet[3280]: W0912 17:37:29.611367 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.611450 kubelet[3280]: E0912 17:37:29.611385 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.611634 kubelet[3280]: E0912 17:37:29.611613 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.611634 kubelet[3280]: W0912 17:37:29.611627 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.611771 kubelet[3280]: E0912 17:37:29.611644 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.611899 kubelet[3280]: E0912 17:37:29.611882 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.611899 kubelet[3280]: W0912 17:37:29.611896 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.612010 kubelet[3280]: E0912 17:37:29.611993 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.612490 kubelet[3280]: E0912 17:37:29.612472 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.612490 kubelet[3280]: W0912 17:37:29.612486 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.612669 kubelet[3280]: E0912 17:37:29.612553 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.612669 kubelet[3280]: E0912 17:37:29.612654 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.612669 kubelet[3280]: W0912 17:37:29.612660 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.612810 kubelet[3280]: E0912 17:37:29.612764 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.612878 kubelet[3280]: E0912 17:37:29.612865 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.612878 kubelet[3280]: W0912 17:37:29.612873 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.612978 kubelet[3280]: E0912 17:37:29.612888 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.613204 kubelet[3280]: E0912 17:37:29.613080 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.613204 kubelet[3280]: W0912 17:37:29.613092 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.613204 kubelet[3280]: E0912 17:37:29.613109 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.613449 kubelet[3280]: E0912 17:37:29.613431 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.613449 kubelet[3280]: W0912 17:37:29.613446 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.613515 kubelet[3280]: E0912 17:37:29.613464 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.613893 kubelet[3280]: E0912 17:37:29.613875 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.613893 kubelet[3280]: W0912 17:37:29.613888 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.613988 kubelet[3280]: E0912 17:37:29.613906 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.614153 kubelet[3280]: E0912 17:37:29.614135 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.614153 kubelet[3280]: W0912 17:37:29.614149 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.614453 kubelet[3280]: E0912 17:37:29.614203 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:29.614453 kubelet[3280]: E0912 17:37:29.614358 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.614453 kubelet[3280]: W0912 17:37:29.614367 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.614453 kubelet[3280]: E0912 17:37:29.614379 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.614558 kubelet[3280]: E0912 17:37:29.614539 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.614558 kubelet[3280]: W0912 17:37:29.614546 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.614608 kubelet[3280]: E0912 17:37:29.614559 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.615038 kubelet[3280]: E0912 17:37:29.614825 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.615038 kubelet[3280]: W0912 17:37:29.614840 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.615038 kubelet[3280]: E0912 17:37:29.614865 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.615169 kubelet[3280]: E0912 17:37:29.615088 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.615169 kubelet[3280]: W0912 17:37:29.615097 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.615169 kubelet[3280]: E0912 17:37:29.615128 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:29.615348 kubelet[3280]: E0912 17:37:29.615330 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:29.615348 kubelet[3280]: W0912 17:37:29.615343 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:29.615455 kubelet[3280]: E0912 17:37:29.615354 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:30.126235 containerd[1989]: time="2025-09-12T17:37:30.126177544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:30.127607 containerd[1989]: time="2025-09-12T17:37:30.127421532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:37:30.130686 containerd[1989]: time="2025-09-12T17:37:30.128967973Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:30.132825 containerd[1989]: time="2025-09-12T17:37:30.132063108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:30.132825 containerd[1989]: time="2025-09-12T17:37:30.132666750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.245361072s" Sep 12 17:37:30.132825 containerd[1989]: time="2025-09-12T17:37:30.132697107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:37:30.135508 containerd[1989]: time="2025-09-12T17:37:30.135481989Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:37:30.161056 containerd[1989]: time="2025-09-12T17:37:30.161005635Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2\"" Sep 12 17:37:30.163361 containerd[1989]: time="2025-09-12T17:37:30.162085401Z" level=info msg="StartContainer for \"6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2\"" Sep 12 17:37:30.163076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131486346.mount: Deactivated successfully. Sep 12 17:37:30.237008 kubelet[3280]: E0912 17:37:30.236946 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:30.246375 systemd[1]: Started cri-containerd-6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2.scope - libcontainer container 6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2. 
Sep 12 17:37:30.284091 containerd[1989]: time="2025-09-12T17:37:30.283907816Z" level=info msg="StartContainer for \"6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2\" returns successfully" Sep 12 17:37:30.298031 systemd[1]: cri-containerd-6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2.scope: Deactivated successfully. Sep 12 17:37:30.447823 kubelet[3280]: I0912 17:37:30.446231 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:37:30.619891 containerd[1989]: time="2025-09-12T17:37:30.607898627Z" level=info msg="shim disconnected" id=6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2 namespace=k8s.io Sep 12 17:37:30.622132 containerd[1989]: time="2025-09-12T17:37:30.621837280Z" level=warning msg="cleaning up after shim disconnected" id=6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2 namespace=k8s.io Sep 12 17:37:30.622132 containerd[1989]: time="2025-09-12T17:37:30.621866985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:37:30.896011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6141233b3379b4b1908e35bd9b5403bf15d22df546438f2365046a74dd2748a2-rootfs.mount: Deactivated successfully. Sep 12 17:37:31.456886 containerd[1989]: time="2025-09-12T17:37:31.456805771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:37:32.237457 kubelet[3280]: E0912 17:37:32.236998 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:34.237910 kubelet[3280]: E0912 17:37:34.236646 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:35.829124 containerd[1989]: time="2025-09-12T17:37:35.829056017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.830395 containerd[1989]: time="2025-09-12T17:37:35.830164755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:37:35.832780 containerd[1989]: time="2025-09-12T17:37:35.831753632Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.835324 containerd[1989]: time="2025-09-12T17:37:35.835263854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.836139 containerd[1989]: time="2025-09-12T17:37:35.836100199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.379244546s" Sep 12 17:37:35.836139 
containerd[1989]: time="2025-09-12T17:37:35.836134611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:37:35.839951 containerd[1989]: time="2025-09-12T17:37:35.839896934Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:37:35.870218 containerd[1989]: time="2025-09-12T17:37:35.870165340Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c\"" Sep 12 17:37:35.870887 containerd[1989]: time="2025-09-12T17:37:35.870743383Z" level=info msg="StartContainer for \"2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c\"" Sep 12 17:37:35.915061 systemd[1]: Started cri-containerd-2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c.scope - libcontainer container 2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c. Sep 12 17:37:35.952275 containerd[1989]: time="2025-09-12T17:37:35.952078560Z" level=info msg="StartContainer for \"2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c\" returns successfully" Sep 12 17:37:36.251505 kubelet[3280]: E0912 17:37:36.250892 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:36.773065 systemd[1]: cri-containerd-2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c.scope: Deactivated successfully. Sep 12 17:37:36.843570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c-rootfs.mount: Deactivated successfully. 
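
[Editor's note] Taken with the preceding entries, this is the complete init-container lifecycle: create in the existing sandbox, start, run to completion, then the cri-containerd scope is deactivated, the dead shim is cleaned up, and the rootfs mount is released. A standalone sketch of that create/start/wait/delete sequence against containerd directly follows; it assumes busybox as a stand-in image and the default namespace rather than the CRI-managed k8s.io one, and in the managed case shown in the log the kubelet drives these steps over CRI instead:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "default")

        image, err := client.Pull(ctx,
            "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Create: the analogue of the CreateContainer event in the log.
        container, err := client.NewContainer(ctx, "install-demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("install-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image),
                oci.WithProcessArgs("true")),
        )
        if err != nil {
            log.Fatal(err)
        }
        // Snapshot cleanup is the analogue of the rootfs .mount teardown.
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // Start: the analogue of StartContainer; the task is what the
        // runc v2 shim supervises.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx) // shim cleanup happens once the task is gone

        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }

        // An init container simply runs to completion, like
        // flexvol-driver and install-cni above.
        status := <-statusC
        code, _, _ := status.Result()
        log.Printf("exited with status %d", code)
    }
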
Sep 12 17:37:36.862438 kubelet[3280]: I0912 17:37:36.861776 3280 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:37:37.076678 containerd[1989]: time="2025-09-12T17:37:37.075936953Z" level=info msg="shim disconnected" id=2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c namespace=k8s.io Sep 12 17:37:37.076678 containerd[1989]: time="2025-09-12T17:37:37.076030990Z" level=warning msg="cleaning up after shim disconnected" id=2a7ce0fd5da1064fa268f38d4fc6c92db98d97e7caff7f549461bafdf47d6c7c namespace=k8s.io Sep 12 17:37:37.076678 containerd[1989]: time="2025-09-12T17:37:37.076044556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:37:37.094824 kubelet[3280]: W0912 17:37:37.094373 3280 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-19-87" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-87' and this object Sep 12 17:37:37.098353 kubelet[3280]: E0912 17:37:37.098080 3280 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-19-87\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-87' and this object" logger="UnhandledError" Sep 12 17:37:37.110015 containerd[1989]: time="2025-09-12T17:37:37.109968558Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:37:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:37:37.111874 systemd[1]: Created slice kubepods-burstable-pod1e120bb6_e43d_4b4a_912f_d1447d2d9f1e.slice - libcontainer container kubepods-burstable-pod1e120bb6_e43d_4b4a_912f_d1447d2d9f1e.slice. Sep 12 17:37:37.126821 systemd[1]: Created slice kubepods-besteffort-pod3cc441c2_b040_4172_b62f_169ce9d45b04.slice - libcontainer container kubepods-besteffort-pod3cc441c2_b040_4172_b62f_169ce9d45b04.slice. Sep 12 17:37:37.150394 systemd[1]: Created slice kubepods-besteffort-podf13ccb5e_3497_40e3_9a44_1d13e18105b6.slice - libcontainer container kubepods-besteffort-podf13ccb5e_3497_40e3_9a44_1d13e18105b6.slice. Sep 12 17:37:37.158689 systemd[1]: Created slice kubepods-besteffort-podbbd39c69_a865_4ebd_9b70_4fae310ae712.slice - libcontainer container kubepods-besteffort-podbbd39c69_a865_4ebd_9b70_4fae310ae712.slice. Sep 12 17:37:37.170856 systemd[1]: Created slice kubepods-burstable-pod8306a462_dd92_4fcd_bcfb_5dc368adabea.slice - libcontainer container kubepods-burstable-pod8306a462_dd92_4fcd_bcfb_5dc368adabea.slice. 
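
[Editor's note] The reflector errors here are the Kubernetes node authorizer doing its job: a kubelet identified as system:node:ip-172-31-19-87 may read a secret only once a pod that mounts it is bound to that node, and the goldmane pod's volumes are only wired up in the reconciler entries that follow. A hedged client-go sketch of the read the kubelet was denied, issued with ordinary admin credentials instead of node credentials (the kubeconfig path is illustrative):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path; the kubelet instead presents its
        // system:node:<name> identity, which is what the node authorizer
        // rejected above ("no relationship found between node ... and
        // this object").
        config, err := clientcmd.BuildConfigFromFlags("",
            "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // The same object the kubelet's reflector tried to list/watch.
        s, err := clientset.CoreV1().Secrets("calico-system").
            Get(context.Background(), "goldmane-key-pair", metav1.GetOptions{})
        if err != nil {
            // With node credentials this is the "forbidden" error above.
            log.Fatal(err)
        }
        fmt.Println("secret", s.Name, "has", len(s.Data), "keys")
    }
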
Sep 12 17:37:37.184149 kubelet[3280]: I0912 17:37:37.184095 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8306a462-dd92-4fcd-bcfb-5dc368adabea-config-volume\") pod \"coredns-668d6bf9bc-dr46r\" (UID: \"8306a462-dd92-4fcd-bcfb-5dc368adabea\") " pod="kube-system/coredns-668d6bf9bc-dr46r" Sep 12 17:37:37.184149 kubelet[3280]: I0912 17:37:37.184148 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xv6k\" (UniqueName: \"kubernetes.io/projected/e49d7f36-b7fa-455a-b787-1fea47393279-kube-api-access-6xv6k\") pod \"calico-apiserver-7c76cc97b8-g6gkc\" (UID: \"e49d7f36-b7fa-455a-b787-1fea47393279\") " pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" Sep 12 17:37:37.184463 kubelet[3280]: I0912 17:37:37.184176 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cnd\" (UniqueName: \"kubernetes.io/projected/0d1a0c17-6003-426a-b7b3-9d6d213504ae-kube-api-access-72cnd\") pod \"calico-apiserver-7c76cc97b8-jfrtm\" (UID: \"0d1a0c17-6003-426a-b7b3-9d6d213504ae\") " pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" Sep 12 17:37:37.184463 kubelet[3280]: I0912 17:37:37.184201 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e120bb6-e43d-4b4a-912f-d1447d2d9f1e-config-volume\") pod \"coredns-668d6bf9bc-cbsml\" (UID: \"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e\") " pod="kube-system/coredns-668d6bf9bc-cbsml" Sep 12 17:37:37.184463 kubelet[3280]: I0912 17:37:37.184229 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd39c69-a865-4ebd-9b70-4fae310ae712-tigera-ca-bundle\") pod \"calico-kube-controllers-5f68b6db8c-kwzx4\" (UID: \"bbd39c69-a865-4ebd-9b70-4fae310ae712\") " pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" Sep 12 17:37:37.184463 kubelet[3280]: I0912 17:37:37.184257 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk94\" (UniqueName: \"kubernetes.io/projected/1e120bb6-e43d-4b4a-912f-d1447d2d9f1e-kube-api-access-jmk94\") pod \"coredns-668d6bf9bc-cbsml\" (UID: \"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e\") " pod="kube-system/coredns-668d6bf9bc-cbsml" Sep 12 17:37:37.184463 kubelet[3280]: I0912 17:37:37.184286 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f13ccb5e-3497-40e3-9a44-1d13e18105b6-goldmane-key-pair\") pod \"goldmane-54d579b49d-5kcf7\" (UID: \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\") " pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:37.184705 kubelet[3280]: I0912 17:37:37.184309 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gtf\" (UniqueName: \"kubernetes.io/projected/f13ccb5e-3497-40e3-9a44-1d13e18105b6-kube-api-access-67gtf\") pod \"goldmane-54d579b49d-5kcf7\" (UID: \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\") " pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:37.184705 kubelet[3280]: I0912 17:37:37.184338 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0d1a0c17-6003-426a-b7b3-9d6d213504ae-calico-apiserver-certs\") pod \"calico-apiserver-7c76cc97b8-jfrtm\" (UID: \"0d1a0c17-6003-426a-b7b3-9d6d213504ae\") " pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" Sep 12 17:37:37.184705 kubelet[3280]: I0912 17:37:37.184362 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13ccb5e-3497-40e3-9a44-1d13e18105b6-config\") pod \"goldmane-54d579b49d-5kcf7\" (UID: \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\") " pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:37.184705 kubelet[3280]: I0912 17:37:37.184387 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e49d7f36-b7fa-455a-b787-1fea47393279-calico-apiserver-certs\") pod \"calico-apiserver-7c76cc97b8-g6gkc\" (UID: \"e49d7f36-b7fa-455a-b787-1fea47393279\") " pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" Sep 12 17:37:37.184705 kubelet[3280]: I0912 17:37:37.184410 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rwp\" (UniqueName: \"kubernetes.io/projected/bbd39c69-a865-4ebd-9b70-4fae310ae712-kube-api-access-84rwp\") pod \"calico-kube-controllers-5f68b6db8c-kwzx4\" (UID: \"bbd39c69-a865-4ebd-9b70-4fae310ae712\") " pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" Sep 12 17:37:37.185127 kubelet[3280]: I0912 17:37:37.184435 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f13ccb5e-3497-40e3-9a44-1d13e18105b6-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5kcf7\" (UID: \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\") " pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:37.185127 kubelet[3280]: I0912 17:37:37.184462 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxvd\" (UniqueName: \"kubernetes.io/projected/8306a462-dd92-4fcd-bcfb-5dc368adabea-kube-api-access-qhxvd\") pod \"coredns-668d6bf9bc-dr46r\" (UID: \"8306a462-dd92-4fcd-bcfb-5dc368adabea\") " pod="kube-system/coredns-668d6bf9bc-dr46r" Sep 12 17:37:37.185144 systemd[1]: Created slice kubepods-besteffort-pode49d7f36_b7fa_455a_b787_1fea47393279.slice - libcontainer container kubepods-besteffort-pode49d7f36_b7fa_455a_b787_1fea47393279.slice. Sep 12 17:37:37.196351 systemd[1]: Created slice kubepods-besteffort-pod0d1a0c17_6003_426a_b7b3_9d6d213504ae.slice - libcontainer container kubepods-besteffort-pod0d1a0c17_6003_426a_b7b3_9d6d213504ae.slice. 
Sep 12 17:37:37.285498 kubelet[3280]: I0912 17:37:37.285452 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-ca-bundle\") pod \"whisker-779877b74-ns76l\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " pod="calico-system/whisker-779877b74-ns76l" Sep 12 17:37:37.286774 kubelet[3280]: I0912 17:37:37.285523 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-backend-key-pair\") pod \"whisker-779877b74-ns76l\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " pod="calico-system/whisker-779877b74-ns76l" Sep 12 17:37:37.286774 kubelet[3280]: I0912 17:37:37.285724 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4q97\" (UniqueName: \"kubernetes.io/projected/3cc441c2-b040-4172-b62f-169ce9d45b04-kube-api-access-g4q97\") pod \"whisker-779877b74-ns76l\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " pod="calico-system/whisker-779877b74-ns76l" Sep 12 17:37:37.428676 containerd[1989]: time="2025-09-12T17:37:37.427618381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbsml,Uid:1e120bb6-e43d-4b4a-912f-d1447d2d9f1e,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:37.438784 containerd[1989]: time="2025-09-12T17:37:37.438487740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779877b74-ns76l,Uid:3cc441c2-b040-4172-b62f-169ce9d45b04,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:37.465579 containerd[1989]: time="2025-09-12T17:37:37.465542761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68b6db8c-kwzx4,Uid:bbd39c69-a865-4ebd-9b70-4fae310ae712,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:37.498134 containerd[1989]: time="2025-09-12T17:37:37.498087928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-g6gkc,Uid:e49d7f36-b7fa-455a-b787-1fea47393279,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:37:37.499176 containerd[1989]: time="2025-09-12T17:37:37.499128802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dr46r,Uid:8306a462-dd92-4fcd-bcfb-5dc368adabea,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:37.509445 containerd[1989]: time="2025-09-12T17:37:37.509371618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-jfrtm,Uid:0d1a0c17-6003-426a-b7b3-9d6d213504ae,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:37:37.510897 containerd[1989]: time="2025-09-12T17:37:37.509375752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:37:37.880999 containerd[1989]: time="2025-09-12T17:37:37.880944004Z" level=error msg="Failed to destroy network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.898915 containerd[1989]: time="2025-09-12T17:37:37.898845087Z" level=error msg="encountered an error cleaning up failed sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.899205 containerd[1989]: time="2025-09-12T17:37:37.899170608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dr46r,Uid:8306a462-dd92-4fcd-bcfb-5dc368adabea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.902296 containerd[1989]: time="2025-09-12T17:37:37.901267148Z" level=error msg="Failed to destroy network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.904216 containerd[1989]: time="2025-09-12T17:37:37.904083553Z" level=error msg="encountered an error cleaning up failed sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.904216 containerd[1989]: time="2025-09-12T17:37:37.904164648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-jfrtm,Uid:0d1a0c17-6003-426a-b7b3-9d6d213504ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.913048 kubelet[3280]: E0912 17:37:37.912996 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.913341 kubelet[3280]: E0912 17:37:37.913221 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.913477 kubelet[3280]: E0912 17:37:37.913456 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" Sep 12 17:37:37.913767 kubelet[3280]: E0912 17:37:37.913558 3280 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" Sep 12 17:37:37.913767 kubelet[3280]: E0912 17:37:37.913631 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c76cc97b8-jfrtm_calico-apiserver(0d1a0c17-6003-426a-b7b3-9d6d213504ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c76cc97b8-jfrtm_calico-apiserver(0d1a0c17-6003-426a-b7b3-9d6d213504ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" podUID="0d1a0c17-6003-426a-b7b3-9d6d213504ae" Sep 12 17:37:37.915811 kubelet[3280]: E0912 17:37:37.914609 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dr46r" Sep 12 17:37:37.915811 kubelet[3280]: E0912 17:37:37.914655 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dr46r" Sep 12 17:37:37.915811 kubelet[3280]: E0912 17:37:37.914706 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dr46r_kube-system(8306a462-dd92-4fcd-bcfb-5dc368adabea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dr46r_kube-system(8306a462-dd92-4fcd-bcfb-5dc368adabea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dr46r" podUID="8306a462-dd92-4fcd-bcfb-5dc368adabea" Sep 12 17:37:37.938205 containerd[1989]: time="2025-09-12T17:37:37.938136083Z" level=error msg="Failed to destroy network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.940119 containerd[1989]: time="2025-09-12T17:37:37.939685681Z" level=error 
msg="encountered an error cleaning up failed sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.940289 containerd[1989]: time="2025-09-12T17:37:37.940245416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-g6gkc,Uid:e49d7f36-b7fa-455a-b787-1fea47393279,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.941619 kubelet[3280]: E0912 17:37:37.941129 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.941619 kubelet[3280]: E0912 17:37:37.941200 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" Sep 12 17:37:37.941619 kubelet[3280]: E0912 17:37:37.941234 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" Sep 12 17:37:37.942121 kubelet[3280]: E0912 17:37:37.941286 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c76cc97b8-g6gkc_calico-apiserver(e49d7f36-b7fa-455a-b787-1fea47393279)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c76cc97b8-g6gkc_calico-apiserver(e49d7f36-b7fa-455a-b787-1fea47393279)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" podUID="e49d7f36-b7fa-455a-b787-1fea47393279" Sep 12 17:37:37.947284 containerd[1989]: time="2025-09-12T17:37:37.947239784Z" level=error msg="Failed to destroy network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 12 17:37:37.947620 containerd[1989]: time="2025-09-12T17:37:37.947585408Z" level=error msg="encountered an error cleaning up failed sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.947717 containerd[1989]: time="2025-09-12T17:37:37.947651200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779877b74-ns76l,Uid:3cc441c2-b040-4172-b62f-169ce9d45b04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.949880 kubelet[3280]: E0912 17:37:37.948903 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.949880 kubelet[3280]: E0912 17:37:37.948990 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-779877b74-ns76l" Sep 12 17:37:37.949880 kubelet[3280]: E0912 17:37:37.949023 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-779877b74-ns76l" Sep 12 17:37:37.950089 kubelet[3280]: E0912 17:37:37.949074 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-779877b74-ns76l_calico-system(3cc441c2-b040-4172-b62f-169ce9d45b04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-779877b74-ns76l_calico-system(3cc441c2-b040-4172-b62f-169ce9d45b04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779877b74-ns76l" podUID="3cc441c2-b040-4172-b62f-169ce9d45b04" Sep 12 17:37:37.950199 containerd[1989]: time="2025-09-12T17:37:37.949997925Z" level=error msg="Failed to destroy network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.950371 containerd[1989]: time="2025-09-12T17:37:37.950334380Z" level=error msg="encountered an error cleaning up failed sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.950438 containerd[1989]: time="2025-09-12T17:37:37.950400868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68b6db8c-kwzx4,Uid:bbd39c69-a865-4ebd-9b70-4fae310ae712,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.950914 kubelet[3280]: E0912 17:37:37.950597 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.950914 kubelet[3280]: E0912 17:37:37.950649 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" Sep 12 17:37:37.950914 kubelet[3280]: E0912 17:37:37.950676 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" Sep 12 17:37:37.951114 kubelet[3280]: E0912 17:37:37.950722 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f68b6db8c-kwzx4_calico-system(bbd39c69-a865-4ebd-9b70-4fae310ae712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f68b6db8c-kwzx4_calico-system(bbd39c69-a865-4ebd-9b70-4fae310ae712)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" podUID="bbd39c69-a865-4ebd-9b70-4fae310ae712" Sep 12 17:37:37.952759 containerd[1989]: time="2025-09-12T17:37:37.952721280Z" level=error msg="Failed to destroy network for sandbox 
\"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.953040 containerd[1989]: time="2025-09-12T17:37:37.953006427Z" level=error msg="encountered an error cleaning up failed sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.953105 containerd[1989]: time="2025-09-12T17:37:37.953070969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbsml,Uid:1e120bb6-e43d-4b4a-912f-d1447d2d9f1e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.953958 kubelet[3280]: E0912 17:37:37.953920 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.954052 kubelet[3280]: E0912 17:37:37.953983 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbsml" Sep 12 17:37:37.954052 kubelet[3280]: E0912 17:37:37.954012 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbsml" Sep 12 17:37:37.954153 kubelet[3280]: E0912 17:37:37.954064 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cbsml_kube-system(1e120bb6-e43d-4b4a-912f-d1447d2d9f1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cbsml_kube-system(1e120bb6-e43d-4b4a-912f-d1447d2d9f1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbsml" podUID="1e120bb6-e43d-4b4a-912f-d1447d2d9f1e" Sep 12 17:37:38.243124 systemd[1]: Created slice 
kubepods-besteffort-pod47dd082d_8313_4b06_a25a_46c1ffeb1afd.slice - libcontainer container kubepods-besteffort-pod47dd082d_8313_4b06_a25a_46c1ffeb1afd.slice. Sep 12 17:37:38.246972 containerd[1989]: time="2025-09-12T17:37:38.246911542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnq78,Uid:47dd082d-8313-4b06-a25a-46c1ffeb1afd,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:38.287282 kubelet[3280]: E0912 17:37:38.287242 3280 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 12 17:37:38.294128 kubelet[3280]: E0912 17:37:38.294045 3280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f13ccb5e-3497-40e3-9a44-1d13e18105b6-goldmane-key-pair podName:f13ccb5e-3497-40e3-9a44-1d13e18105b6 nodeName:}" failed. No retries permitted until 2025-09-12 17:37:38.787821709 +0000 UTC m=+33.083615317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/f13ccb5e-3497-40e3-9a44-1d13e18105b6-goldmane-key-pair") pod "goldmane-54d579b49d-5kcf7" (UID: "f13ccb5e-3497-40e3-9a44-1d13e18105b6") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:37:38.348774 containerd[1989]: time="2025-09-12T17:37:38.348716706Z" level=error msg="Failed to destroy network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.350415 containerd[1989]: time="2025-09-12T17:37:38.350197630Z" level=error msg="encountered an error cleaning up failed sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.350415 containerd[1989]: time="2025-09-12T17:37:38.350295750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnq78,Uid:47dd082d-8313-4b06-a25a-46c1ffeb1afd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.351761 kubelet[3280]: E0912 17:37:38.350683 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.351761 kubelet[3280]: E0912 17:37:38.350763 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-hnq78" Sep 12 17:37:38.351761 kubelet[3280]: E0912 17:37:38.350813 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hnq78" Sep 12 17:37:38.351959 kubelet[3280]: E0912 17:37:38.351206 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hnq78_calico-system(47dd082d-8313-4b06-a25a-46c1ffeb1afd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hnq78_calico-system(47dd082d-8313-4b06-a25a-46c1ffeb1afd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:38.354446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859-shm.mount: Deactivated successfully. Sep 12 17:37:38.503115 kubelet[3280]: I0912 17:37:38.503002 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Sep 12 17:37:38.505570 kubelet[3280]: I0912 17:37:38.505535 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:38.522680 kubelet[3280]: I0912 17:37:38.521786 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:37:38.524267 kubelet[3280]: I0912 17:37:38.524251 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:38.525561 kubelet[3280]: I0912 17:37:38.525546 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:38.529138 kubelet[3280]: I0912 17:37:38.529113 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:38.530894 kubelet[3280]: I0912 17:37:38.530875 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:38.551014 containerd[1989]: time="2025-09-12T17:37:38.549729831Z" level=info msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" Sep 12 17:37:38.551014 containerd[1989]: time="2025-09-12T17:37:38.550138317Z" level=info msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\"" Sep 12 17:37:38.551204 containerd[1989]: time="2025-09-12T17:37:38.551181179Z" level=info msg="Ensure that sandbox 
61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a in task-service has been cleanup successfully" Sep 12 17:37:38.551861 containerd[1989]: time="2025-09-12T17:37:38.551828353Z" level=info msg="Ensure that sandbox d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf in task-service has been cleanup successfully" Sep 12 17:37:38.552543 containerd[1989]: time="2025-09-12T17:37:38.552519709Z" level=info msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\"" Sep 12 17:37:38.552676 containerd[1989]: time="2025-09-12T17:37:38.552653276Z" level=info msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" Sep 12 17:37:38.552945 containerd[1989]: time="2025-09-12T17:37:38.552924970Z" level=info msg="Ensure that sandbox 5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7 in task-service has been cleanup successfully" Sep 12 17:37:38.553083 containerd[1989]: time="2025-09-12T17:37:38.553058819Z" level=info msg="Ensure that sandbox 98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4 in task-service has been cleanup successfully" Sep 12 17:37:38.553728 containerd[1989]: time="2025-09-12T17:37:38.552606044Z" level=info msg="StopPodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\"" Sep 12 17:37:38.553926 containerd[1989]: time="2025-09-12T17:37:38.553911354Z" level=info msg="Ensure that sandbox d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859 in task-service has been cleanup successfully" Sep 12 17:37:38.555912 containerd[1989]: time="2025-09-12T17:37:38.555881893Z" level=info msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\"" Sep 12 17:37:38.556072 containerd[1989]: time="2025-09-12T17:37:38.552631679Z" level=info msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\"" Sep 12 17:37:38.556291 containerd[1989]: time="2025-09-12T17:37:38.556263508Z" level=info msg="Ensure that sandbox c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4 in task-service has been cleanup successfully" Sep 12 17:37:38.559532 containerd[1989]: time="2025-09-12T17:37:38.559506869Z" level=info msg="Ensure that sandbox c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34 in task-service has been cleanup successfully" Sep 12 17:37:38.657322 containerd[1989]: time="2025-09-12T17:37:38.657256705Z" level=error msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" failed" error="failed to destroy network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.657615 kubelet[3280]: E0912 17:37:38.657462 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:38.658819 containerd[1989]: time="2025-09-12T17:37:38.657808694Z" level=error msg="StopPodSandbox for 
\"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" failed" error="failed to destroy network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.659681 kubelet[3280]: E0912 17:37:38.657952 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:38.668026 kubelet[3280]: E0912 17:37:38.657515 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"} Sep 12 17:37:38.668153 kubelet[3280]: E0912 17:37:38.668051 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e49d7f36-b7fa-455a-b787-1fea47393279\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.668153 kubelet[3280]: E0912 17:37:38.668078 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e49d7f36-b7fa-455a-b787-1fea47393279\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" podUID="e49d7f36-b7fa-455a-b787-1fea47393279" Sep 12 17:37:38.668278 kubelet[3280]: E0912 17:37:38.657978 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"} Sep 12 17:37:38.668278 kubelet[3280]: E0912 17:37:38.668181 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.668278 kubelet[3280]: E0912 17:37:38.668199 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47dd082d-8313-4b06-a25a-46c1ffeb1afd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hnq78" podUID="47dd082d-8313-4b06-a25a-46c1ffeb1afd" Sep 12 17:37:38.688734 containerd[1989]: time="2025-09-12T17:37:38.688450961Z" level=error msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" failed" error="failed to destroy network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.688734 containerd[1989]: time="2025-09-12T17:37:38.688604195Z" level=error msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" failed" error="failed to destroy network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.688734 containerd[1989]: time="2025-09-12T17:37:38.688673362Z" level=error msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" failed" error="failed to destroy network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.689053 kubelet[3280]: E0912 17:37:38.688747 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:38.689053 kubelet[3280]: E0912 17:37:38.688804 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"} Sep 12 17:37:38.689053 kubelet[3280]: E0912 17:37:38.688834 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d1a0c17-6003-426a-b7b3-9d6d213504ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.689053 kubelet[3280]: E0912 17:37:38.688861 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d1a0c17-6003-426a-b7b3-9d6d213504ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" podUID="0d1a0c17-6003-426a-b7b3-9d6d213504ae" Sep 12 17:37:38.689239 kubelet[3280]: E0912 17:37:38.688919 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:38.689239 kubelet[3280]: E0912 17:37:38.688936 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf"} Sep 12 17:37:38.689239 kubelet[3280]: E0912 17:37:38.688953 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cc441c2-b040-4172-b62f-169ce9d45b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.689239 kubelet[3280]: E0912 17:37:38.688969 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cc441c2-b040-4172-b62f-169ce9d45b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779877b74-ns76l" podUID="3cc441c2-b040-4172-b62f-169ce9d45b04" Sep 12 17:37:38.689376 kubelet[3280]: E0912 17:37:38.689076 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:38.689376 kubelet[3280]: E0912 17:37:38.689123 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7"} Sep 12 17:37:38.689376 kubelet[3280]: E0912 17:37:38.689142 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbd39c69-a865-4ebd-9b70-4fae310ae712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.689376 kubelet[3280]: E0912 17:37:38.689157 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"bbd39c69-a865-4ebd-9b70-4fae310ae712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" podUID="bbd39c69-a865-4ebd-9b70-4fae310ae712" Sep 12 17:37:38.693630 containerd[1989]: time="2025-09-12T17:37:38.693251440Z" level=error msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" failed" error="failed to destroy network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.693861 kubelet[3280]: E0912 17:37:38.693459 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Sep 12 17:37:38.693861 kubelet[3280]: E0912 17:37:38.693502 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"} Sep 12 17:37:38.693861 kubelet[3280]: E0912 17:37:38.693532 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8306a462-dd92-4fcd-bcfb-5dc368adabea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.693861 kubelet[3280]: E0912 17:37:38.693554 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8306a462-dd92-4fcd-bcfb-5dc368adabea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dr46r" podUID="8306a462-dd92-4fcd-bcfb-5dc368adabea" Sep 12 17:37:38.694983 containerd[1989]: time="2025-09-12T17:37:38.694938270Z" level=error msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" failed" error="failed to destroy network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:38.695148 kubelet[3280]: E0912 17:37:38.695106 3280 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:37:38.695204 kubelet[3280]: E0912 17:37:38.695154 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a"} Sep 12 17:37:38.695204 kubelet[3280]: E0912 17:37:38.695185 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:38.695286 kubelet[3280]: E0912 17:37:38.695204 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbsml" podUID="1e120bb6-e43d-4b4a-912f-d1447d2d9f1e" Sep 12 17:37:38.959853 containerd[1989]: time="2025-09-12T17:37:38.959811634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5kcf7,Uid:f13ccb5e-3497-40e3-9a44-1d13e18105b6,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:39.048233 containerd[1989]: time="2025-09-12T17:37:39.048081696Z" level=error msg="Failed to destroy network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:39.055861 containerd[1989]: time="2025-09-12T17:37:39.055806950Z" level=error msg="encountered an error cleaning up failed sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:39.056532 containerd[1989]: time="2025-09-12T17:37:39.055882744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5kcf7,Uid:f13ccb5e-3497-40e3-9a44-1d13e18105b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:39.056680 kubelet[3280]: E0912 17:37:39.056124 3280 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:39.056680 kubelet[3280]: E0912 17:37:39.056177 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:39.056680 kubelet[3280]: E0912 17:37:39.056198 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5kcf7" Sep 12 17:37:39.056886 kubelet[3280]: E0912 17:37:39.056250 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5kcf7_calico-system(f13ccb5e-3497-40e3-9a44-1d13e18105b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5kcf7_calico-system(f13ccb5e-3497-40e3-9a44-1d13e18105b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5kcf7" podUID="f13ccb5e-3497-40e3-9a44-1d13e18105b6" Sep 12 17:37:39.295752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc-shm.mount: Deactivated successfully. 
Sep 12 17:37:39.552927 kubelet[3280]: I0912 17:37:39.551187 3280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:37:39.553490 containerd[1989]: time="2025-09-12T17:37:39.552219761Z" level=info msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\"" Sep 12 17:37:39.561881 containerd[1989]: time="2025-09-12T17:37:39.557130621Z" level=info msg="Ensure that sandbox b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc in task-service has been cleanup successfully" Sep 12 17:37:39.671651 containerd[1989]: time="2025-09-12T17:37:39.671145307Z" level=error msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" failed" error="failed to destroy network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:39.671934 kubelet[3280]: E0912 17:37:39.671580 3280 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:37:39.671934 kubelet[3280]: E0912 17:37:39.671641 3280 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc"} Sep 12 17:37:39.671934 kubelet[3280]: E0912 17:37:39.671781 3280 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:39.671934 kubelet[3280]: E0912 17:37:39.671835 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f13ccb5e-3497-40e3-9a44-1d13e18105b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5kcf7" podUID="f13ccb5e-3497-40e3-9a44-1d13e18105b6" Sep 12 17:37:44.143654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927608853.mount: Deactivated successfully. 
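
Every sandbox ADD and DELETE in the records above fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and bails out while the file is absent. The kubelet treats each failed KillPodSandbox/RunPodSandbox as a pod sync error, logs "Error syncing pod, skipping", and retries on later sync passes, which is why identical errors repeat for the same sandbox IDs. A minimal Go sketch of the failing guard, with the path and wording taken from the error text (this is not Calico's actual source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path quoted verbatim in every failing record above.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the failing check: calico/node writes this file
    // at startup, so its absence means the node agent is not up yet (or the
    // hostPath mount into the plugin's view of /var/lib/calico/ is missing).
    func readNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        b, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

Until calico/node is running (its image is still being pulled at this point in the log), every sandbox operation on the node keeps failing with exactly this message.
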
Sep 12 17:37:44.229975 containerd[1989]: time="2025-09-12T17:37:44.229443377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:37:44.244863 containerd[1989]: time="2025-09-12T17:37:44.244826495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:44.269400 containerd[1989]: time="2025-09-12T17:37:44.269350784Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:44.271205 containerd[1989]: time="2025-09-12T17:37:44.271095342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:44.274529 containerd[1989]: time="2025-09-12T17:37:44.274335945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.760786239s" Sep 12 17:37:44.274529 containerd[1989]: time="2025-09-12T17:37:44.274392812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:37:44.315452 containerd[1989]: time="2025-09-12T17:37:44.315399820Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:37:44.467333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574665052.mount: Deactivated successfully. Sep 12 17:37:44.535002 containerd[1989]: time="2025-09-12T17:37:44.534952061Z" level=info msg="CreateContainer within sandbox \"d755b580f1703f3952f5475e52139a03d75bfb5b440f628ced284e04b4e7f578\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d\"" Sep 12 17:37:44.551739 containerd[1989]: time="2025-09-12T17:37:44.551698437Z" level=info msg="StartContainer for \"cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d\"" Sep 12 17:37:44.702020 systemd[1]: Started cri-containerd-cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d.scope - libcontainer container cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d. Sep 12 17:37:44.750239 containerd[1989]: time="2025-09-12T17:37:44.748833541Z" level=info msg="StartContainer for \"cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d\" returns successfully" Sep 12 17:37:44.867310 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:37:44.869039 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
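
The pull above finishes in 6.760786239s and records both the repo tag and the repo digest. The same pull can be driven from Go with the containerd client; a sketch assuming the v1 client module (github.com/containerd/containerd), the default socket path, and the "k8s.io" namespace that kubelet-managed images live in:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // containerd's own accounting for this pull was 6.760786239s.
        fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
    }
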
Sep 12 17:37:45.258837 containerd[1989]: time="2025-09-12T17:37:45.258711394Z" level=info msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.411 [INFO][4666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.413 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" iface="eth0" netns="/var/run/netns/cni-bc9bd876-63c1-e647-ac86-fea0c3d0cf93" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.413 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" iface="eth0" netns="/var/run/netns/cni-bc9bd876-63c1-e647-ac86-fea0c3d0cf93" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.415 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" iface="eth0" netns="/var/run/netns/cni-bc9bd876-63c1-e647-ac86-fea0c3d0cf93" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.415 [INFO][4666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.415 [INFO][4666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.870 [INFO][4678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.874 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.875 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.895 [WARNING][4678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.895 [INFO][4678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.897 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:45.902917 containerd[1989]: 2025-09-12 17:37:45.899 [INFO][4666] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:37:45.904337 containerd[1989]: time="2025-09-12T17:37:45.903201260Z" level=info msg="TearDown network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" successfully" Sep 12 17:37:45.904337 containerd[1989]: time="2025-09-12T17:37:45.903233071Z" level=info msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" returns successfully" Sep 12 17:37:45.908556 systemd[1]: run-netns-cni\x2dbc9bd876\x2d63c1\x2de647\x2dac86\x2dfea0c3d0cf93.mount: Deactivated successfully. Sep 12 17:37:46.013779 kubelet[3280]: I0912 17:37:46.013536 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4q97\" (UniqueName: \"kubernetes.io/projected/3cc441c2-b040-4172-b62f-169ce9d45b04-kube-api-access-g4q97\") pod \"3cc441c2-b040-4172-b62f-169ce9d45b04\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " Sep 12 17:37:46.013779 kubelet[3280]: I0912 17:37:46.013726 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-backend-key-pair\") pod \"3cc441c2-b040-4172-b62f-169ce9d45b04\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " Sep 12 17:37:46.013779 kubelet[3280]: I0912 17:37:46.013784 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-ca-bundle\") pod \"3cc441c2-b040-4172-b62f-169ce9d45b04\" (UID: \"3cc441c2-b040-4172-b62f-169ce9d45b04\") " Sep 12 17:37:46.026124 kubelet[3280]: I0912 17:37:46.023606 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc441c2-b040-4172-b62f-169ce9d45b04-kube-api-access-g4q97" (OuterVolumeSpecName: "kube-api-access-g4q97") pod "3cc441c2-b040-4172-b62f-169ce9d45b04" (UID: "3cc441c2-b040-4172-b62f-169ce9d45b04"). InnerVolumeSpecName "kube-api-access-g4q97". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:37:46.026805 systemd[1]: var-lib-kubelet-pods-3cc441c2\x2db040\x2d4172\x2db62f\x2d169ce9d45b04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4q97.mount: Deactivated successfully. Sep 12 17:37:46.037042 kubelet[3280]: I0912 17:37:46.036033 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3cc441c2-b040-4172-b62f-169ce9d45b04" (UID: "3cc441c2-b040-4172-b62f-169ce9d45b04"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:37:46.037042 kubelet[3280]: I0912 17:37:46.036097 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3cc441c2-b040-4172-b62f-169ce9d45b04" (UID: "3cc441c2-b040-4172-b62f-169ce9d45b04"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:37:46.037042 systemd[1]: var-lib-kubelet-pods-3cc441c2\x2db040\x2d4172\x2db62f\x2d169ce9d45b04-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
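
The mount unit names above (kube\x2dapi\x2daccess\x2dg4q97, kubernetes.io\x7esecret) look mangled because systemd escapes filesystem paths into unit names: "/" becomes "-" and bytes outside a small safe set become \xXX, with ".mount" appended. A sketch approximating systemd-escape(1); the real rules also special-case leading dots and the root path:

    package main

    import "fmt"

    // escapeUnit approximates systemd's path escaping, which produced the
    // unit names in the mount records above. Sketch only.
    func escapeUnit(path string) string {
        out := ""
        for i := 0; i < len(path); i++ {
            c := path[i]
            switch {
            case c == '/':
                out += "-"
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out += string(c)
            default:
                out += fmt.Sprintf(`\x%02x`, c)
            }
        }
        return out
    }

    func main() {
        // Reproduces the whisker-backend-key-pair unit name seen above
        // (systemd then appends ".mount").
        fmt.Println(escapeUnit("var/lib/kubelet/pods/3cc441c2-b040-4172-b62f-169ce9d45b04/volumes/kubernetes.io~secret/whisker-backend-key-pair"))
    }
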
Sep 12 17:37:46.120887 kubelet[3280]: I0912 17:37:46.120839 3280 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-ca-bundle\") on node \"ip-172-31-19-87\" DevicePath \"\"" Sep 12 17:37:46.120887 kubelet[3280]: I0912 17:37:46.120900 3280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4q97\" (UniqueName: \"kubernetes.io/projected/3cc441c2-b040-4172-b62f-169ce9d45b04-kube-api-access-g4q97\") on node \"ip-172-31-19-87\" DevicePath \"\"" Sep 12 17:37:46.121080 kubelet[3280]: I0912 17:37:46.120919 3280 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3cc441c2-b040-4172-b62f-169ce9d45b04-whisker-backend-key-pair\") on node \"ip-172-31-19-87\" DevicePath \"\"" Sep 12 17:37:46.267675 systemd[1]: Removed slice kubepods-besteffort-pod3cc441c2_b040_4172_b62f_169ce9d45b04.slice - libcontainer container kubepods-besteffort-pod3cc441c2_b040_4172_b62f_169ce9d45b04.slice. Sep 12 17:37:46.644231 systemd[1]: run-containerd-runc-k8s.io-cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d-runc.jn9OSW.mount: Deactivated successfully. Sep 12 17:37:46.709814 kubelet[3280]: I0912 17:37:46.669770 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mgkrd" podStartSLOduration=3.827977072 podStartE2EDuration="21.62219772s" podCreationTimestamp="2025-09-12 17:37:25 +0000 UTC" firstStartedPulling="2025-09-12 17:37:26.480936495 +0000 UTC m=+20.776730111" lastFinishedPulling="2025-09-12 17:37:44.275157145 +0000 UTC m=+38.570950759" observedRunningTime="2025-09-12 17:37:45.629971114 +0000 UTC m=+39.925764740" watchObservedRunningTime="2025-09-12 17:37:46.62219772 +0000 UTC m=+40.917991346" Sep 12 17:37:46.857593 systemd[1]: Created slice kubepods-besteffort-podfdec53f6_03a6_49f9_a6e3_a02416e26b17.slice - libcontainer container kubepods-besteffort-podfdec53f6_03a6_49f9_a6e3_a02416e26b17.slice. 
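
The startup-latency record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the time spent pulling images (an inference from the numbers in the record, not a documented formula). Recomputing from the logged timestamps, with the monotonic m=+ suffixes dropped:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(v string) time.Time {
        t, err := time.Parse(layout, v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-12 17:37:25 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-09-12 17:37:26.480936495 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-09-12 17:37:44.275157145 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-09-12 17:37:46.62219772 +0000 UTC")    // watchObservedRunningTime

        e2e := running.Sub(created)     // 21.62219772s, as logged
        pull := lastPull.Sub(firstPull) // time spent pulling calico/node
        fmt.Println("e2e:", e2e)
        fmt.Println("slo:", (e2e - pull).Seconds()) // ~3.827977072, as logged
    }
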
Sep 12 17:37:46.864502 kubelet[3280]: I0912 17:37:46.863045 3280 status_manager.go:890] "Failed to get status for pod" podUID="fdec53f6-03a6-49f9-a6e3-a02416e26b17" pod="calico-system/whisker-9897556d5-q2wbn" err="pods \"whisker-9897556d5-q2wbn\" is forbidden: User \"system:node:ip-172-31-19-87\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-87' and this object" Sep 12 17:37:46.872247 kubelet[3280]: W0912 17:37:46.872212 3280 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-19-87" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-87' and this object Sep 12 17:37:46.872596 kubelet[3280]: E0912 17:37:46.872537 3280 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-19-87\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-87' and this object" logger="UnhandledError" Sep 12 17:37:46.935241 kubelet[3280]: I0912 17:37:46.935107 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6npm\" (UniqueName: \"kubernetes.io/projected/fdec53f6-03a6-49f9-a6e3-a02416e26b17-kube-api-access-r6npm\") pod \"whisker-9897556d5-q2wbn\" (UID: \"fdec53f6-03a6-49f9-a6e3-a02416e26b17\") " pod="calico-system/whisker-9897556d5-q2wbn" Sep 12 17:37:46.935241 kubelet[3280]: I0912 17:37:46.935172 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdec53f6-03a6-49f9-a6e3-a02416e26b17-whisker-ca-bundle\") pod \"whisker-9897556d5-q2wbn\" (UID: \"fdec53f6-03a6-49f9-a6e3-a02416e26b17\") " pod="calico-system/whisker-9897556d5-q2wbn" Sep 12 17:37:46.935241 kubelet[3280]: I0912 17:37:46.935205 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fdec53f6-03a6-49f9-a6e3-a02416e26b17-whisker-backend-key-pair\") pod \"whisker-9897556d5-q2wbn\" (UID: \"fdec53f6-03a6-49f9-a6e3-a02416e26b17\") " pod="calico-system/whisker-9897556d5-q2wbn" Sep 12 17:37:47.320848 kernel: bpftool[4853]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:37:47.574659 systemd-networkd[1820]: vxlan.calico: Link UP Sep 12 17:37:47.574672 systemd-networkd[1820]: vxlan.calico: Gained carrier Sep 12 17:37:47.577822 (udev-worker)[4643]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:37:47.626947 (udev-worker)[4644]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:37:47.627812 (udev-worker)[4901]: Network interface NamePolicy= disabled on kernel command line. 
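
The "no relationship found between node 'ip-172-31-19-87' and this object" errors come from the node authorizer: a kubelet may only read secrets referenced by pods already bound to it, so there is a short window right after whisker-9897556d5-q2wbn is scheduled in which these lookups are forbidden and simply retried. A client-go sketch of tolerating that window, with the namespace and secret name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx := context.Background()
        for {
            _, err := cs.CoreV1().Secrets("calico-system").Get(ctx, "whisker-backend-key-pair", metav1.GetOptions{})
            if apierrors.IsForbidden(err) {
                // The node authorizer has not linked this node to the pod
                // yet; the kubelet's reflector retries in much the same way.
                time.Sleep(time.Second)
                continue
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println("secret now visible to this node")
            return
        }
    }
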
Sep 12 17:37:47.778151 containerd[1989]: time="2025-09-12T17:37:47.778102362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9897556d5-q2wbn,Uid:fdec53f6-03a6-49f9-a6e3-a02416e26b17,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:48.049001 systemd-networkd[1820]: calidb57e8c2ce6: Link UP Sep 12 17:37:48.049259 systemd-networkd[1820]: calidb57e8c2ce6: Gained carrier Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.907 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0 whisker-9897556d5- calico-system fdec53f6-03a6-49f9-a6e3-a02416e26b17 925 0 2025-09-12 17:37:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9897556d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-87 whisker-9897556d5-q2wbn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb57e8c2ce6 [] [] }} ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.907 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.961 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" HandleID="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Workload="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.961 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" HandleID="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Workload="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f740), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-87", "pod":"whisker-9897556d5-q2wbn", "timestamp":"2025-09-12 17:37:47.961448791 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.961 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.961 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.961 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87' Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:47.979 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.002 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.010 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.013 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.017 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.017 [INFO][4929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.019 [INFO][4929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.027 [INFO][4929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.036 [INFO][4929] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.193/26] block=192.168.51.192/26 handle="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.036 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.193/26] handle="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" host="ip-172-31-19-87" Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.037 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
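
The IPAM sequence above (acquire the host-wide lock, confirm this host's affinity for 192.168.51.192/26, load the block, claim the lowest free address, write the block back, release the lock) is a lock-plus-bitmap pattern. A deliberately simplified sketch, not Calico's implementation:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block models an affinity block such as 192.168.51.192/26: a CIDR plus
    // a used-address set, guarded by the equivalent of the host-wide lock.
    type block struct {
        mu     sync.Mutex
        prefix netip.Prefix
        used   map[netip.Addr]bool
    }

    // assign claims the lowest free address after the network address, the
    // moral equivalent of "Attempting to assign 1 addresses from block".
    func (b *block) assign() (netip.Addr, bool) {
        b.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            prefix: netip.MustParsePrefix("192.168.51.192/26"),
            used:   map[netip.Addr]bool{},
        }
        a1, _ := b.assign()
        a2, _ := b.assign()
        fmt.Println(a1, a2) // 192.168.51.193 192.168.51.194, the two IPs handed out in this section
    }
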
Sep 12 17:37:48.086196 containerd[1989]: 2025-09-12 17:37:48.037 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.193/26] IPv6=[] ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" HandleID="k8s-pod-network.ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Workload="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.042 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0", GenerateName:"whisker-9897556d5-", Namespace:"calico-system", SelfLink:"", UID:"fdec53f6-03a6-49f9-a6e3-a02416e26b17", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9897556d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"whisker-9897556d5-q2wbn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb57e8c2ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.042 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.193/32] ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.042 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb57e8c2ce6 ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.052 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.053 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" 
WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0", GenerateName:"whisker-9897556d5-", Namespace:"calico-system", SelfLink:"", UID:"fdec53f6-03a6-49f9-a6e3-a02416e26b17", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9897556d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad", Pod:"whisker-9897556d5-q2wbn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb57e8c2ce6", MAC:"4a:16:73:a2:49:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:48.087483 containerd[1989]: 2025-09-12 17:37:48.078 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad" Namespace="calico-system" Pod="whisker-9897556d5-q2wbn" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0" Sep 12 17:37:48.163493 containerd[1989]: time="2025-09-12T17:37:48.162211111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:48.163493 containerd[1989]: time="2025-09-12T17:37:48.162302539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:48.163493 containerd[1989]: time="2025-09-12T17:37:48.162328876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:48.167439 containerd[1989]: time="2025-09-12T17:37:48.167142847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:48.217721 systemd[1]: Started cri-containerd-ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad.scope - libcontainer container ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad. 
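
Workload endpoint names such as ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0 pack node, orchestrator, pod, and interface into one dash-separated string with literal dashes doubled; that is an observation from these records, not a documented format. A decoding sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // decode splits on single dashes while treating "--" as a literal dash.
    func decode(name string) []string {
        const sentinel = "\x00"
        s := strings.ReplaceAll(name, "--", sentinel)
        parts := strings.Split(s, "-")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, sentinel, "-")
        }
        return parts
    }

    func main() {
        fmt.Printf("%q\n", decode("ip--172--31--19--87-k8s-whisker--9897556d5--q2wbn-eth0"))
        // ["ip-172-31-19-87" "k8s" "whisker-9897556d5-q2wbn" "eth0"]
    }
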
Sep 12 17:37:48.248261 kubelet[3280]: I0912 17:37:48.247263 3280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc441c2-b040-4172-b62f-169ce9d45b04" path="/var/lib/kubelet/pods/3cc441c2-b040-4172-b62f-169ce9d45b04/volumes" Sep 12 17:37:48.325433 containerd[1989]: time="2025-09-12T17:37:48.325324283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9897556d5-q2wbn,Uid:fdec53f6-03a6-49f9-a6e3-a02416e26b17,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad\"" Sep 12 17:37:48.328201 containerd[1989]: time="2025-09-12T17:37:48.327986628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:37:48.829962 systemd-networkd[1820]: vxlan.calico: Gained IPv6LL Sep 12 17:37:49.238431 containerd[1989]: time="2025-09-12T17:37:49.238282030Z" level=info msg="StopPodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\"" Sep 12 17:37:49.247230 containerd[1989]: time="2025-09-12T17:37:49.247188046Z" level=info msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\"" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5039] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" iface="eth0" netns="/var/run/netns/cni-60c0bd69-0458-d113-da64-07c006941e17" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.323 [INFO][5039] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" iface="eth0" netns="/var/run/netns/cni-60c0bd69-0458-d113-da64-07c006941e17" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.323 [INFO][5039] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" iface="eth0" netns="/var/run/netns/cni-60c0bd69-0458-d113-da64-07c006941e17" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.323 [INFO][5039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.324 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.376 [INFO][5053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.376 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.376 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.392 [WARNING][5053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.392 [INFO][5053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.395 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:49.410381 containerd[1989]: 2025-09-12 17:37:49.406 [INFO][5039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Sep 12 17:37:49.413923 containerd[1989]: time="2025-09-12T17:37:49.413872746Z" level=info msg="TearDown network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" successfully" Sep 12 17:37:49.413923 containerd[1989]: time="2025-09-12T17:37:49.413922035Z" level=info msg="StopPodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" returns successfully" Sep 12 17:37:49.416546 containerd[1989]: time="2025-09-12T17:37:49.416130199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnq78,Uid:47dd082d-8313-4b06-a25a-46c1ffeb1afd,Namespace:calico-system,Attempt:1,}" Sep 12 17:37:49.416152 systemd[1]: run-netns-cni\x2d60c0bd69\x2d0458\x2dd113\x2dda64\x2d07c006941e17.mount: Deactivated successfully. Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.320 [INFO][5038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.321 [INFO][5038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" iface="eth0" netns="/var/run/netns/cni-9ae9951b-f8aa-adde-655a-5ac703b21fa0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" iface="eth0" netns="/var/run/netns/cni-9ae9951b-f8aa-adde-655a-5ac703b21fa0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" iface="eth0" netns="/var/run/netns/cni-9ae9951b-f8aa-adde-655a-5ac703b21fa0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.322 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.385 [INFO][5051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.387 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.395 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.406 [WARNING][5051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.406 [INFO][5051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.409 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:49.425223 containerd[1989]: 2025-09-12 17:37:49.420 [INFO][5038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Sep 12 17:37:49.428488 containerd[1989]: time="2025-09-12T17:37:49.425920711Z" level=info msg="TearDown network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" successfully" Sep 12 17:37:49.428488 containerd[1989]: time="2025-09-12T17:37:49.425953985Z" level=info msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" returns successfully" Sep 12 17:37:49.430670 containerd[1989]: time="2025-09-12T17:37:49.429595581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-g6gkc,Uid:e49d7f36-b7fa-455a-b787-1fea47393279,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:37:49.431144 systemd[1]: run-netns-cni\x2d9ae9951b\x2df8aa\x2dadde\x2d655a\x2d5ac703b21fa0.mount: Deactivated successfully. 
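
Both teardowns above use the same two-step release: "Releasing address using handleID" first, and when that hits the WARNING "Asked to release address but it doesn't exist. Ignoring", "Releasing address using workloadID" as a fallback, so teardown converges even when the allocation record is already gone. A hypothetical sketch of that fallback shape (both release functions are stand-ins):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("address not found")

    // Stand-ins for the two release paths in the records above.
    func releaseByHandle(handleID string) error     { return errNotFound }
    func releaseByWorkload(workloadID string) error { return nil }

    func release(handleID, workloadID string) error {
        err := releaseByHandle(handleID)
        if err == nil {
            return nil
        }
        if !errors.Is(err, errNotFound) {
            return err
        }
        // The handle is gone ("Asked to release address but it doesn't
        // exist. Ignoring"); fall back to the workload identity.
        return releaseByWorkload(workloadID)
    }

    func main() {
        err := release(
            "k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859",
            "ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0",
        )
        fmt.Println("teardown complete, err:", err)
    }
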
Sep 12 17:37:49.534164 systemd-networkd[1820]: calidb57e8c2ce6: Gained IPv6LL Sep 12 17:37:49.692831 systemd-networkd[1820]: cali0150c16cd66: Link UP Sep 12 17:37:49.695194 systemd-networkd[1820]: cali0150c16cd66: Gained carrier Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.515 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0 csi-node-driver- calico-system 47dd082d-8313-4b06-a25a-46c1ffeb1afd 940 0 2025-09-12 17:37:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-87 csi-node-driver-hnq78 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0150c16cd66 [] [] }} ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-" Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.516 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.592 [INFO][5089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" HandleID="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.592 [INFO][5089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" HandleID="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-87", "pod":"csi-node-driver-hnq78", "timestamp":"2025-09-12 17:37:49.591771571 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.593 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.593 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
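
"Gained IPv6LL" in the systemd-networkd records means the interface acquired an IPv6 link-local address. For classic EUI-64 addressing that address is derived from the interface MAC (modern kernels may instead use stable-privacy addresses, so treat this as one possible derivation). A sketch using the calidb57e8c2ce6 MAC from the endpoint record above:

    package main

    import (
        "fmt"
        "net"
    )

    // linkLocal derives the EUI-64 IPv6 link-local address from a MAC:
    // fe80:: prefix, U/L bit flipped, ff:fe inserted mid-MAC.
    func linkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, 16)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("4a:16:73:a2:49:15") // calidb57e8c2ce6's MAC
        fmt.Println(linkLocal(mac))                 // fe80::4816:73ff:fea2:4915
    }
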
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.593 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87'
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.608 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.621 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.629 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.635 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.639 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.641 [INFO][5089] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.646 [INFO][5089] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.658 [INFO][5089] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.673 [INFO][5089] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.194/26] block=192.168.51.192/26 handle="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.673 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.194/26] handle="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" host="ip-172-31-19-87"
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.674 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:49.752693 containerd[1989]: 2025-09-12 17:37:49.674 [INFO][5089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.194/26] IPv6=[] ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" HandleID="k8s-pod-network.a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.680 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47dd082d-8313-4b06-a25a-46c1ffeb1afd", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"csi-node-driver-hnq78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0150c16cd66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.681 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.194/32] ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.681 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0150c16cd66 ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.707 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.712 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47dd082d-8313-4b06-a25a-46c1ffeb1afd", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8", Pod:"csi-node-driver-hnq78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0150c16cd66", MAC:"66:bb:44:52:a8:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:49.753707 containerd[1989]: 2025-09-12 17:37:49.741 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8" Namespace="calico-system" Pod="csi-node-driver-hnq78" WorkloadEndpoint="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:37:49.809441 systemd-networkd[1820]: cali7a41b00f6e0: Link UP
Sep 12 17:37:49.816227 systemd-networkd[1820]: cali7a41b00f6e0: Gained carrier
Sep 12 17:37:49.829437 containerd[1989]: time="2025-09-12T17:37:49.829322028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:49.831809 containerd[1989]: time="2025-09-12T17:37:49.829621677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:49.831809 containerd[1989]: time="2025-09-12T17:37:49.829652421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:49.831809 containerd[1989]: time="2025-09-12T17:37:49.829957383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:49.846984 containerd[1989]: time="2025-09-12T17:37:49.845540431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:49.855820 containerd[1989]: time="2025-09-12T17:37:49.854708882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 17:37:49.855820 containerd[1989]: time="2025-09-12T17:37:49.855661333Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.609 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0 calico-apiserver-7c76cc97b8- calico-apiserver e49d7f36-b7fa-455a-b787-1fea47393279 941 0 2025-09-12 17:37:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c76cc97b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-87 calico-apiserver-7c76cc97b8-g6gkc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a41b00f6e0 [] [] }} ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.609 [INFO][5083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.695 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" HandleID="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.696 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" HandleID="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-87", "pod":"calico-apiserver-7c76cc97b8-g6gkc", "timestamp":"2025-09-12 17:37:49.695562485 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.696 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.696 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.696 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87'
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.712 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.729 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.741 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.745 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.753 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.754 [INFO][5098] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.759 [INFO][5098] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.771 [INFO][5098] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.789 [INFO][5098] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.195/26] block=192.168.51.192/26 handle="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.790 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.195/26] handle="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" host="ip-172-31-19-87"
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.790 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:49.860145 containerd[1989]: 2025-09-12 17:37:49.790 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.195/26] IPv6=[] ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" HandleID="k8s-pod-network.63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.797 [INFO][5083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49d7f36-b7fa-455a-b787-1fea47393279", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"calico-apiserver-7c76cc97b8-g6gkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a41b00f6e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.798 [INFO][5083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.195/32] ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.798 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a41b00f6e0 ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.815 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.819 [INFO][5083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49d7f36-b7fa-455a-b787-1fea47393279", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844", Pod:"calico-apiserver-7c76cc97b8-g6gkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a41b00f6e0", MAC:"96:a8:cf:29:f1:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:49.861117 containerd[1989]: 2025-09-12 17:37:49.850 [INFO][5083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-g6gkc" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:37:49.868386 containerd[1989]: time="2025-09-12T17:37:49.868336113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:49.871394 containerd[1989]: time="2025-09-12T17:37:49.871265926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.543138127s"
Sep 12 17:37:49.871822 containerd[1989]: time="2025-09-12T17:37:49.871405272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 17:37:49.878824 containerd[1989]: time="2025-09-12T17:37:49.878662621Z" level=info msg="CreateContainer within sandbox \"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 17:37:49.893025 systemd[1]: Started cri-containerd-a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8.scope - libcontainer container a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8.
Sep 12 17:37:49.906694 containerd[1989]: time="2025-09-12T17:37:49.906649164Z" level=info msg="CreateContainer within sandbox \"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"adafad6ff6c8901b8832be8b4fc9f4d237e8e152261ec62f8832a09de4f37be3\""
Sep 12 17:37:49.908517 containerd[1989]: time="2025-09-12T17:37:49.908485462Z" level=info msg="StartContainer for \"adafad6ff6c8901b8832be8b4fc9f4d237e8e152261ec62f8832a09de4f37be3\""
Sep 12 17:37:49.944466 containerd[1989]: time="2025-09-12T17:37:49.944342922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:49.944760 containerd[1989]: time="2025-09-12T17:37:49.944723168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:49.945013 containerd[1989]: time="2025-09-12T17:37:49.944936040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:49.945644 containerd[1989]: time="2025-09-12T17:37:49.945500344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:49.973887 containerd[1989]: time="2025-09-12T17:37:49.973758758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnq78,Uid:47dd082d-8313-4b06-a25a-46c1ffeb1afd,Namespace:calico-system,Attempt:1,} returns sandbox id \"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8\""
Sep 12 17:37:49.977041 containerd[1989]: time="2025-09-12T17:37:49.976837115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 17:37:49.999040 systemd[1]: Started cri-containerd-63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844.scope - libcontainer container 63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844.
Sep 12 17:37:50.010509 systemd[1]: Started cri-containerd-adafad6ff6c8901b8832be8b4fc9f4d237e8e152261ec62f8832a09de4f37be3.scope - libcontainer container adafad6ff6c8901b8832be8b4fc9f4d237e8e152261ec62f8832a09de4f37be3.
Sep 12 17:37:50.101844 containerd[1989]: time="2025-09-12T17:37:50.100243795Z" level=info msg="StartContainer for \"adafad6ff6c8901b8832be8b4fc9f4d237e8e152261ec62f8832a09de4f37be3\" returns successfully"
Sep 12 17:37:50.107070 containerd[1989]: time="2025-09-12T17:37:50.107020482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-g6gkc,Uid:e49d7f36-b7fa-455a-b787-1fea47393279,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844\""
Sep 12 17:37:50.241005 containerd[1989]: time="2025-09-12T17:37:50.240947276Z" level=info msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\""
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.300 [INFO][5250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.300 [INFO][5250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" iface="eth0" netns="/var/run/netns/cni-e2725a22-7946-f170-487b-1fb627f3f44b"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.300 [INFO][5250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" iface="eth0" netns="/var/run/netns/cni-e2725a22-7946-f170-487b-1fb627f3f44b"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.302 [INFO][5250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" iface="eth0" netns="/var/run/netns/cni-e2725a22-7946-f170-487b-1fb627f3f44b"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.302 [INFO][5250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.302 [INFO][5250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.335 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.337 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.337 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.362 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.362 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.375 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:50.383737 containerd[1989]: 2025-09-12 17:37:50.379 [INFO][5250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc"
Sep 12 17:37:50.383737 containerd[1989]: time="2025-09-12T17:37:50.383510902Z" level=info msg="TearDown network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" successfully"
Sep 12 17:37:50.383737 containerd[1989]: time="2025-09-12T17:37:50.383543525Z" level=info msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" returns successfully"
Sep 12 17:37:50.385398 containerd[1989]: time="2025-09-12T17:37:50.384378446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5kcf7,Uid:f13ccb5e-3497-40e3-9a44-1d13e18105b6,Namespace:calico-system,Attempt:1,}"
Sep 12 17:37:50.429361 systemd[1]: run-netns-cni\x2de2725a22\x2d7946\x2df170\x2d487b\x2d1fb627f3f44b.mount: Deactivated successfully.
Sep 12 17:37:50.564662 systemd[1]: Started sshd@9-172.31.19.87:22-147.75.109.163:45356.service - OpenSSH per-connection server daemon (147.75.109.163:45356).
Sep 12 17:37:50.653470 systemd-networkd[1820]: califac71f4df51: Link UP
Sep 12 17:37:50.653716 systemd-networkd[1820]: califac71f4df51: Gained carrier
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.503 [INFO][5267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0 goldmane-54d579b49d- calico-system f13ccb5e-3497-40e3-9a44-1d13e18105b6 965 0 2025-09-12 17:37:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-87 goldmane-54d579b49d-5kcf7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califac71f4df51 [] [] }} ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.504 [INFO][5267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.558 [INFO][5278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" HandleID="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.558 [INFO][5278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" HandleID="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f970), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-87", "pod":"goldmane-54d579b49d-5kcf7", "timestamp":"2025-09-12 17:37:50.558724156 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.559 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.559 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.559 [INFO][5278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87'
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.571 [INFO][5278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.581 [INFO][5278] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.589 [INFO][5278] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.597 [INFO][5278] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.612 [INFO][5278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.612 [INFO][5278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.618 [INFO][5278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.630 [INFO][5278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.644 [INFO][5278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.196/26] block=192.168.51.192/26 handle="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.644 [INFO][5278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.196/26] handle="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" host="ip-172-31-19-87"
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.644 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:50.680279 containerd[1989]: 2025-09-12 17:37:50.644 [INFO][5278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.196/26] IPv6=[] ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" HandleID="k8s-pod-network.54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.647 [INFO][5267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f13ccb5e-3497-40e3-9a44-1d13e18105b6", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"goldmane-54d579b49d-5kcf7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac71f4df51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.647 [INFO][5267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.196/32] ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.647 [INFO][5267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califac71f4df51 ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.652 [INFO][5267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.653 [INFO][5267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f13ccb5e-3497-40e3-9a44-1d13e18105b6", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190", Pod:"goldmane-54d579b49d-5kcf7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac71f4df51", MAC:"ca:e2:65:32:2f:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:50.685148 containerd[1989]: 2025-09-12 17:37:50.673 [INFO][5267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190" Namespace="calico-system" Pod="goldmane-54d579b49d-5kcf7" WorkloadEndpoint="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0"
Sep 12 17:37:50.721726 containerd[1989]: time="2025-09-12T17:37:50.721367924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:50.721726 containerd[1989]: time="2025-09-12T17:37:50.721434645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:50.721726 containerd[1989]: time="2025-09-12T17:37:50.721449961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:50.721726 containerd[1989]: time="2025-09-12T17:37:50.721545701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:50.762410 systemd[1]: run-containerd-runc-k8s.io-54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190-runc.BoDed9.mount: Deactivated successfully.
Sep 12 17:37:50.777011 systemd[1]: Started cri-containerd-54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190.scope - libcontainer container 54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190.
Sep 12 17:37:50.815598 systemd-networkd[1820]: cali0150c16cd66: Gained IPv6LL
Sep 12 17:37:50.825542 sshd[5284]: Accepted publickey for core from 147.75.109.163 port 45356 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:37:50.830949 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:50.851182 systemd-logind[1960]: New session 10 of user core.
Sep 12 17:37:50.853344 containerd[1989]: time="2025-09-12T17:37:50.853302993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5kcf7,Uid:f13ccb5e-3497-40e3-9a44-1d13e18105b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190\""
Sep 12 17:37:50.857303 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:37:51.199431 systemd-networkd[1820]: cali7a41b00f6e0: Gained IPv6LL
Sep 12 17:37:51.241992 containerd[1989]: time="2025-09-12T17:37:51.241147237Z" level=info msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\""
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.389 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.389 [INFO][5362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" iface="eth0" netns="/var/run/netns/cni-46725a95-2428-c691-6497-1380a9178c3b"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.389 [INFO][5362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" iface="eth0" netns="/var/run/netns/cni-46725a95-2428-c691-6497-1380a9178c3b"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.390 [INFO][5362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" iface="eth0" netns="/var/run/netns/cni-46725a95-2428-c691-6497-1380a9178c3b"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.390 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.390 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.458 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.459 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.459 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.471 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.471 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.474 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:51.489622 containerd[1989]: 2025-09-12 17:37:51.481 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:37:51.489622 containerd[1989]: time="2025-09-12T17:37:51.486410805Z" level=info msg="TearDown network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" successfully"
Sep 12 17:37:51.489622 containerd[1989]: time="2025-09-12T17:37:51.486535286Z" level=info msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" returns successfully"
Sep 12 17:37:51.492163 containerd[1989]: time="2025-09-12T17:37:51.491184040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dr46r,Uid:8306a462-dd92-4fcd-bcfb-5dc368adabea,Namespace:kube-system,Attempt:1,}"
Sep 12 17:37:51.493173 systemd[1]: run-netns-cni\x2d46725a95\x2d2428\x2dc691\x2d6497\x2d1380a9178c3b.mount: Deactivated successfully.
Sep 12 17:37:51.703346 sshd[5284]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:51.710643 systemd[1]: sshd@9-172.31.19.87:22-147.75.109.163:45356.service: Deactivated successfully.
Sep 12 17:37:51.710988 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:37:51.716660 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:37:51.722977 systemd-logind[1960]: Removed session 10.
Sep 12 17:37:51.742965 containerd[1989]: time="2025-09-12T17:37:51.742823052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.744943 containerd[1989]: time="2025-09-12T17:37:51.744860407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 17:37:51.747337 containerd[1989]: time="2025-09-12T17:37:51.747283706Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.750786 containerd[1989]: time="2025-09-12T17:37:51.750723850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:37:51.752007 containerd[1989]: time="2025-09-12T17:37:51.751758642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.774886135s"
Sep 12 17:37:51.752007 containerd[1989]: time="2025-09-12T17:37:51.751825789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 17:37:51.754390 containerd[1989]: time="2025-09-12T17:37:51.754348618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:37:51.756328 containerd[1989]: time="2025-09-12T17:37:51.756143574Z" level=info msg="CreateContainer within sandbox \"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:37:51.797410 containerd[1989]: time="2025-09-12T17:37:51.797245561Z" level=info msg="CreateContainer within sandbox \"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"64ec727e86ed21a50a98f8fd30943d7a44d4df6923a454bb1fd211c2e3d300fe\""
Sep 12 17:37:51.798127 containerd[1989]: time="2025-09-12T17:37:51.798061824Z" level=info msg="StartContainer for \"64ec727e86ed21a50a98f8fd30943d7a44d4df6923a454bb1fd211c2e3d300fe\""
Sep 12 17:37:51.806220 systemd-networkd[1820]: cali6fc6ae3c88a: Link UP
Sep 12 17:37:51.809977 systemd-networkd[1820]: cali6fc6ae3c88a: Gained carrier
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.671 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0 coredns-668d6bf9bc- kube-system 8306a462-dd92-4fcd-bcfb-5dc368adabea 996 0 2025-09-12 17:37:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-87 coredns-668d6bf9bc-dr46r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fc6ae3c88a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.672 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.730 [INFO][5390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" HandleID="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.730 [INFO][5390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" HandleID="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fdb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-87", "pod":"coredns-668d6bf9bc-dr46r", "timestamp":"2025-09-12 17:37:51.729910961 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.731 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.731 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.731 [INFO][5390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87'
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.739 [INFO][5390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.746 [INFO][5390] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.753 [INFO][5390] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.760 [INFO][5390] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.766 [INFO][5390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.766 [INFO][5390] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.772 [INFO][5390] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.779 [INFO][5390] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.793 [INFO][5390] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.197/26] block=192.168.51.192/26 handle="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.793 [INFO][5390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.197/26] handle="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" host="ip-172-31-19-87"
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.793 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:51.862941 containerd[1989]: 2025-09-12 17:37:51.793 [INFO][5390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.197/26] IPv6=[] ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" HandleID="k8s-pod-network.98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.799 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8306a462-dd92-4fcd-bcfb-5dc368adabea", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"coredns-668d6bf9bc-dr46r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc6ae3c88a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.799 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.197/32] ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.799 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc6ae3c88a ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.810 [INFO][5376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.812 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8306a462-dd92-4fcd-bcfb-5dc368adabea", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072", Pod:"coredns-668d6bf9bc-dr46r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc6ae3c88a", MAC:"c6:40:5b:08:a5:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:51.863924 containerd[1989]: 2025-09-12 17:37:51.853 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072" Namespace="kube-system" Pod="coredns-668d6bf9bc-dr46r" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:37:51.888991 systemd[1]: Started cri-containerd-64ec727e86ed21a50a98f8fd30943d7a44d4df6923a454bb1fd211c2e3d300fe.scope - libcontainer container 64ec727e86ed21a50a98f8fd30943d7a44d4df6923a454bb1fd211c2e3d300fe.
Sep 12 17:37:51.914618 containerd[1989]: time="2025-09-12T17:37:51.914075211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:37:51.914618 containerd[1989]: time="2025-09-12T17:37:51.914178513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:37:51.914618 containerd[1989]: time="2025-09-12T17:37:51.914202949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:51.914618 containerd[1989]: time="2025-09-12T17:37:51.914330600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:37:51.943220 systemd[1]: Started cri-containerd-98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072.scope - libcontainer container 98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072.
Sep 12 17:37:51.979129 containerd[1989]: time="2025-09-12T17:37:51.978729650Z" level=info msg="StartContainer for \"64ec727e86ed21a50a98f8fd30943d7a44d4df6923a454bb1fd211c2e3d300fe\" returns successfully"
Sep 12 17:37:52.027053 containerd[1989]: time="2025-09-12T17:37:52.027003067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dr46r,Uid:8306a462-dd92-4fcd-bcfb-5dc368adabea,Namespace:kube-system,Attempt:1,} returns sandbox id \"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072\""
Sep 12 17:37:52.030061 containerd[1989]: time="2025-09-12T17:37:52.030016123Z" level=info msg="CreateContainer within sandbox \"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:37:52.057407 containerd[1989]: time="2025-09-12T17:37:52.057358643Z" level=info msg="CreateContainer within sandbox \"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93d1959725e230e6408a2394aa3dbee72bfd52794464466df0efd722968b952a\""
Sep 12 17:37:52.075014 containerd[1989]: time="2025-09-12T17:37:52.074654089Z" level=info msg="StartContainer for \"93d1959725e230e6408a2394aa3dbee72bfd52794464466df0efd722968b952a\""
Sep 12 17:37:52.094067 systemd-networkd[1820]: califac71f4df51: Gained IPv6LL
Sep 12 17:37:52.108049 systemd[1]: Started cri-containerd-93d1959725e230e6408a2394aa3dbee72bfd52794464466df0efd722968b952a.scope - libcontainer container 93d1959725e230e6408a2394aa3dbee72bfd52794464466df0efd722968b952a.
Sep 12 17:37:52.166026 containerd[1989]: time="2025-09-12T17:37:52.165963544Z" level=info msg="StartContainer for \"93d1959725e230e6408a2394aa3dbee72bfd52794464466df0efd722968b952a\" returns successfully"
Sep 12 17:37:52.240021 containerd[1989]: time="2025-09-12T17:37:52.239733540Z" level=info msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\""
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.305 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.305 [INFO][5525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" iface="eth0" netns="/var/run/netns/cni-4ac608df-709b-b061-3104-f594a00813b4"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.306 [INFO][5525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" iface="eth0" netns="/var/run/netns/cni-4ac608df-709b-b061-3104-f594a00813b4"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.307 [INFO][5525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" iface="eth0" netns="/var/run/netns/cni-4ac608df-709b-b061-3104-f594a00813b4"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.307 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.307 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.339 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.339 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.339 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.346 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.346 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0"
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.348 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:52.353392 containerd[1989]: 2025-09-12 17:37:52.350 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a"
Sep 12 17:37:52.356242 containerd[1989]: time="2025-09-12T17:37:52.354109063Z" level=info msg="TearDown network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" successfully"
Sep 12 17:37:52.356242 containerd[1989]: time="2025-09-12T17:37:52.354191060Z" level=info msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" returns successfully"
Sep 12 17:37:52.356242 containerd[1989]: time="2025-09-12T17:37:52.355329684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbsml,Uid:1e120bb6-e43d-4b4a-912f-d1447d2d9f1e,Namespace:kube-system,Attempt:1,}"
Sep 12 17:37:52.493935 systemd[1]: run-netns-cni\x2d4ac608df\x2d709b\x2db061\x2d3104\x2df594a00813b4.mount: Deactivated successfully.
Sep 12 17:37:52.571767 systemd-networkd[1820]: calicbcc949d133: Link UP Sep 12 17:37:52.572143 systemd-networkd[1820]: calicbcc949d133: Gained carrier Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.485 [INFO][5541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0 coredns-668d6bf9bc- kube-system 1e120bb6-e43d-4b4a-912f-d1447d2d9f1e 1016 0 2025-09-12 17:37:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-87 coredns-668d6bf9bc-cbsml eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbcc949d133 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.485 [INFO][5541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.521 [INFO][5555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" HandleID="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.521 [INFO][5555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" HandleID="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd930), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-87", "pod":"coredns-668d6bf9bc-cbsml", "timestamp":"2025-09-12 17:37:52.521359814 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.521 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.521 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.521 [INFO][5555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87' Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.528 [INFO][5555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.533 [INFO][5555] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.538 [INFO][5555] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.540 [INFO][5555] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.545 [INFO][5555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.545 [INFO][5555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.547 [INFO][5555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58 Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.552 [INFO][5555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.564 [INFO][5555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.198/26] block=192.168.51.192/26 handle="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.564 [INFO][5555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.198/26] handle="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" host="ip-172-31-19-87" Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.564 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:37:52.597521 containerd[1989]: 2025-09-12 17:37:52.564 [INFO][5555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.198/26] IPv6=[] ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" HandleID="k8s-pod-network.945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.567 [INFO][5541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"coredns-668d6bf9bc-cbsml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbcc949d133", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.567 [INFO][5541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.198/32] ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.568 [INFO][5541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbcc949d133 ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.571 [INFO][5541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" 
WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.573 [INFO][5541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58", Pod:"coredns-668d6bf9bc-cbsml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbcc949d133", MAC:"6a:56:1c:8e:47:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:52.600488 containerd[1989]: 2025-09-12 17:37:52.593 [INFO][5541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbsml" WorkloadEndpoint="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:37:52.639276 containerd[1989]: time="2025-09-12T17:37:52.638759399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:52.639276 containerd[1989]: time="2025-09-12T17:37:52.638932900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:52.641748 containerd[1989]: time="2025-09-12T17:37:52.638966371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:52.641748 containerd[1989]: time="2025-09-12T17:37:52.641608492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:52.697060 systemd[1]: run-containerd-runc-k8s.io-945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58-runc.hDVltC.mount: Deactivated successfully. Sep 12 17:37:52.708936 systemd[1]: Started cri-containerd-945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58.scope - libcontainer container 945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58. Sep 12 17:37:52.737644 kubelet[3280]: I0912 17:37:52.737515 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dr46r" podStartSLOduration=42.737489588 podStartE2EDuration="42.737489588s" podCreationTimestamp="2025-09-12 17:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:52.713870629 +0000 UTC m=+47.009664254" watchObservedRunningTime="2025-09-12 17:37:52.737489588 +0000 UTC m=+47.033283214" Sep 12 17:37:52.816632 containerd[1989]: time="2025-09-12T17:37:52.816571733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbsml,Uid:1e120bb6-e43d-4b4a-912f-d1447d2d9f1e,Namespace:kube-system,Attempt:1,} returns sandbox id \"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58\"" Sep 12 17:37:52.821764 containerd[1989]: time="2025-09-12T17:37:52.821425537Z" level=info msg="CreateContainer within sandbox \"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:37:52.848987 containerd[1989]: time="2025-09-12T17:37:52.848420459Z" level=info msg="CreateContainer within sandbox \"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89bb170e88c384e4936b92fdd3230f4f147f2a275655f56da554c948b995eae4\"" Sep 12 17:37:52.849853 containerd[1989]: time="2025-09-12T17:37:52.849243142Z" level=info msg="StartContainer for \"89bb170e88c384e4936b92fdd3230f4f147f2a275655f56da554c948b995eae4\"" Sep 12 17:37:52.888040 systemd[1]: Started cri-containerd-89bb170e88c384e4936b92fdd3230f4f147f2a275655f56da554c948b995eae4.scope - libcontainer container 89bb170e88c384e4936b92fdd3230f4f147f2a275655f56da554c948b995eae4. Sep 12 17:37:52.920425 containerd[1989]: time="2025-09-12T17:37:52.920243092Z" level=info msg="StartContainer for \"89bb170e88c384e4936b92fdd3230f4f147f2a275655f56da554c948b995eae4\" returns successfully" Sep 12 17:37:53.246933 containerd[1989]: time="2025-09-12T17:37:53.244823546Z" level=info msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\"" Sep 12 17:37:53.247452 systemd-networkd[1820]: cali6fc6ae3c88a: Gained IPv6LL Sep 12 17:37:53.269949 containerd[1989]: time="2025-09-12T17:37:53.269899302Z" level=info msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" Sep 12 17:37:53.499913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611211237.mount: Deactivated successfully. Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" iface="eth0" netns="/var/run/netns/cni-363eab11-0d14-1d6e-a505-893584a4444f" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" iface="eth0" netns="/var/run/netns/cni-363eab11-0d14-1d6e-a505-893584a4444f" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" iface="eth0" netns="/var/run/netns/cni-363eab11-0d14-1d6e-a505-893584a4444f" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.427 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.491 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.491 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.491 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.511 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.511 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.515 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:53.529525 containerd[1989]: 2025-09-12 17:37:53.523 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:37:53.533243 containerd[1989]: time="2025-09-12T17:37:53.532842584Z" level=info msg="TearDown network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" successfully" Sep 12 17:37:53.533243 containerd[1989]: time="2025-09-12T17:37:53.532891026Z" level=info msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" returns successfully" Sep 12 17:37:53.539095 containerd[1989]: time="2025-09-12T17:37:53.539047918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68b6db8c-kwzx4,Uid:bbd39c69-a865-4ebd-9b70-4fae310ae712,Namespace:calico-system,Attempt:1,}" Sep 12 17:37:53.540642 systemd[1]: run-netns-cni\x2d363eab11\x2d0d14\x2d1d6e\x2da505\x2d893584a4444f.mount: Deactivated successfully. Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.471 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.472 [INFO][5684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" iface="eth0" netns="/var/run/netns/cni-cb5141ed-6093-929c-2b4c-08d206fa4edc" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.475 [INFO][5684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" iface="eth0" netns="/var/run/netns/cni-cb5141ed-6093-929c-2b4c-08d206fa4edc" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.475 [INFO][5684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" iface="eth0" netns="/var/run/netns/cni-cb5141ed-6093-929c-2b4c-08d206fa4edc" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.475 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.475 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.552 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.553 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.553 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.563 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.563 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.571 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:53.612442 containerd[1989]: 2025-09-12 17:37:53.592 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Sep 12 17:37:53.613738 containerd[1989]: time="2025-09-12T17:37:53.613484439Z" level=info msg="TearDown network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" successfully" Sep 12 17:37:53.613738 containerd[1989]: time="2025-09-12T17:37:53.613531105Z" level=info msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" returns successfully" Sep 12 17:37:53.616089 containerd[1989]: time="2025-09-12T17:37:53.615977207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-jfrtm,Uid:0d1a0c17-6003-426a-b7b3-9d6d213504ae,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:37:53.737118 kubelet[3280]: I0912 17:37:53.737034 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbsml" podStartSLOduration=43.737008612 podStartE2EDuration="43.737008612s" podCreationTimestamp="2025-09-12 17:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:53.736298805 +0000 UTC m=+48.032092440" watchObservedRunningTime="2025-09-12 17:37:53.737008612 +0000 UTC m=+48.032802236" Sep 12 17:37:54.018011 systemd-networkd[1820]: calicbcc949d133: Gained IPv6LL Sep 12 17:37:54.113293 systemd-networkd[1820]: cali13a99d43463: Link UP Sep 12 17:37:54.116494 systemd-networkd[1820]: cali13a99d43463: Gained carrier Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.816 [INFO][5727] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0 calico-apiserver-7c76cc97b8- calico-apiserver 0d1a0c17-6003-426a-b7b3-9d6d213504ae 1034 0 2025-09-12 17:37:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c76cc97b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-87 calico-apiserver-7c76cc97b8-jfrtm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13a99d43463 [] [] }} ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.817 [INFO][5727] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.965 [INFO][5748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" HandleID="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.965 [INFO][5748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" HandleID="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122b10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-87", "pod":"calico-apiserver-7c76cc97b8-jfrtm", "timestamp":"2025-09-12 17:37:53.96509927 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.965 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.965 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.965 [INFO][5748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87' Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.980 [INFO][5748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:53.991 [INFO][5748] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.008 [INFO][5748] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.012 [INFO][5748] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.026 [INFO][5748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.026 [INFO][5748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.037 [INFO][5748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96 Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.066 [INFO][5748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.085 [INFO][5748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.199/26] block=192.168.51.192/26 handle="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.085 [INFO][5748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.199/26] handle="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" host="ip-172-31-19-87" Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.086 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:37:54.169599 containerd[1989]: 2025-09-12 17:37:54.086 [INFO][5748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.199/26] IPv6=[] ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" HandleID="k8s-pod-network.38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.099 [INFO][5727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d1a0c17-6003-426a-b7b3-9d6d213504ae", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"calico-apiserver-7c76cc97b8-jfrtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13a99d43463", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.100 [INFO][5727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.199/32] ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.100 [INFO][5727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13a99d43463 ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.124 [INFO][5727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.127 [INFO][5727] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d1a0c17-6003-426a-b7b3-9d6d213504ae", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96", Pod:"calico-apiserver-7c76cc97b8-jfrtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13a99d43463", MAC:"56:f9:a9:65:3d:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:54.172121 containerd[1989]: 2025-09-12 17:37:54.158 [INFO][5727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96" Namespace="calico-apiserver" Pod="calico-apiserver-7c76cc97b8-jfrtm" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0" Sep 12 17:37:54.265716 containerd[1989]: time="2025-09-12T17:37:54.261983747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:54.265716 containerd[1989]: time="2025-09-12T17:37:54.262088694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:54.265716 containerd[1989]: time="2025-09-12T17:37:54.262115454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:54.265716 containerd[1989]: time="2025-09-12T17:37:54.262239037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:54.297353 systemd-networkd[1820]: cali7565a49d9e4: Link UP Sep 12 17:37:54.304361 systemd-networkd[1820]: cali7565a49d9e4: Gained carrier Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:53.793 [INFO][5716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0 calico-kube-controllers-5f68b6db8c- calico-system bbd39c69-a865-4ebd-9b70-4fae310ae712 1033 0 2025-09-12 17:37:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f68b6db8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-87 calico-kube-controllers-5f68b6db8c-kwzx4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7565a49d9e4 [] [] }} ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:53.793 [INFO][5716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:53.963 [INFO][5743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" HandleID="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:53.964 [INFO][5743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" HandleID="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-87", "pod":"calico-kube-controllers-5f68b6db8c-kwzx4", "timestamp":"2025-09-12 17:37:53.963977877 +0000 UTC"}, Hostname:"ip-172-31-19-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:53.970 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.088 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.088 [INFO][5743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-87' Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.103 [INFO][5743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.122 [INFO][5743] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.147 [INFO][5743] ipam/ipam.go 511: Trying affinity for 192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.162 [INFO][5743] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.174 [INFO][5743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.175 [INFO][5743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.180 [INFO][5743] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20 Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.199 [INFO][5743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.242 [INFO][5743] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.51.200/26] block=192.168.51.192/26 handle="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.244 [INFO][5743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.200/26] handle="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" host="ip-172-31-19-87" Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.248 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:37:54.342337 containerd[1989]: 2025-09-12 17:37:54.249 [INFO][5743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.51.200/26] IPv6=[] ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" HandleID="k8s-pod-network.974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.343604 containerd[1989]: 2025-09-12 17:37:54.280 [INFO][5716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0", GenerateName:"calico-kube-controllers-5f68b6db8c-", Namespace:"calico-system", SelfLink:"", UID:"bbd39c69-a865-4ebd-9b70-4fae310ae712", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68b6db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"", Pod:"calico-kube-controllers-5f68b6db8c-kwzx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7565a49d9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:54.343604 containerd[1989]: 2025-09-12 17:37:54.281 [INFO][5716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.200/32] ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.343604 containerd[1989]: 2025-09-12 17:37:54.284 [INFO][5716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7565a49d9e4 ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.343604 containerd[1989]: 2025-09-12 17:37:54.302 [INFO][5716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.343604 containerd[1989]: 
2025-09-12 17:37:54.303 [INFO][5716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0", GenerateName:"calico-kube-controllers-5f68b6db8c-", Namespace:"calico-system", SelfLink:"", UID:"bbd39c69-a865-4ebd-9b70-4fae310ae712", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68b6db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20", Pod:"calico-kube-controllers-5f68b6db8c-kwzx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7565a49d9e4", MAC:"12:66:07:dd:6a:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:54.343604 containerd[1989]: 2025-09-12 17:37:54.326 [INFO][5716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20" Namespace="calico-system" Pod="calico-kube-controllers-5f68b6db8c-kwzx4" WorkloadEndpoint="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:37:54.363016 systemd[1]: Started cri-containerd-38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96.scope - libcontainer container 38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96. Sep 12 17:37:54.407151 containerd[1989]: time="2025-09-12T17:37:54.403691326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:54.407151 containerd[1989]: time="2025-09-12T17:37:54.406382797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:54.407151 containerd[1989]: time="2025-09-12T17:37:54.406404815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:54.407151 containerd[1989]: time="2025-09-12T17:37:54.406515578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:54.449435 systemd[1]: Started cri-containerd-974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20.scope - libcontainer container 974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20. Sep 12 17:37:54.505501 systemd[1]: run-netns-cni\x2dcb5141ed\x2d6093\x2d929c\x2d2b4c\x2d08d206fa4edc.mount: Deactivated successfully. Sep 12 17:37:54.651052 containerd[1989]: time="2025-09-12T17:37:54.650868161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c76cc97b8-jfrtm,Uid:0d1a0c17-6003-426a-b7b3-9d6d213504ae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96\"" Sep 12 17:37:54.669835 containerd[1989]: time="2025-09-12T17:37:54.668057566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68b6db8c-kwzx4,Uid:bbd39c69-a865-4ebd-9b70-4fae310ae712,Namespace:calico-system,Attempt:1,} returns sandbox id \"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20\"" Sep 12 17:37:55.381913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537338422.mount: Deactivated successfully. Sep 12 17:37:55.402583 containerd[1989]: time="2025-09-12T17:37:55.402535499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:55.404123 containerd[1989]: time="2025-09-12T17:37:55.404052460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:37:55.407462 containerd[1989]: time="2025-09-12T17:37:55.406193298Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:55.410087 containerd[1989]: time="2025-09-12T17:37:55.410027935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:55.411378 containerd[1989]: time="2025-09-12T17:37:55.411326955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.656843512s" Sep 12 17:37:55.411514 containerd[1989]: time="2025-09-12T17:37:55.411380606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:37:55.416036 containerd[1989]: time="2025-09-12T17:37:55.415922337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:37:55.417844 containerd[1989]: time="2025-09-12T17:37:55.417735486Z" level=info msg="CreateContainer within sandbox \"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:37:55.440912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854614916.mount: Deactivated successfully. 
Sep 12 17:37:55.443464 containerd[1989]: time="2025-09-12T17:37:55.443418573Z" level=info msg="CreateContainer within sandbox \"ca0222abb3989a34c9c15e5ab35b72dc1a7da16554ed5626289f9f7d1e9202ad\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2ab0e0ab8952d90d3e351ee6a94cac4473fe1abfadca2ead68c0294bcb16d94b\"" Sep 12 17:37:55.445255 containerd[1989]: time="2025-09-12T17:37:55.444002851Z" level=info msg="StartContainer for \"2ab0e0ab8952d90d3e351ee6a94cac4473fe1abfadca2ead68c0294bcb16d94b\"" Sep 12 17:37:55.487003 systemd[1]: Started cri-containerd-2ab0e0ab8952d90d3e351ee6a94cac4473fe1abfadca2ead68c0294bcb16d94b.scope - libcontainer container 2ab0e0ab8952d90d3e351ee6a94cac4473fe1abfadca2ead68c0294bcb16d94b. Sep 12 17:37:55.550334 systemd-networkd[1820]: cali13a99d43463: Gained IPv6LL Sep 12 17:37:55.558345 containerd[1989]: time="2025-09-12T17:37:55.558287911Z" level=info msg="StartContainer for \"2ab0e0ab8952d90d3e351ee6a94cac4473fe1abfadca2ead68c0294bcb16d94b\" returns successfully" Sep 12 17:37:56.126137 systemd-networkd[1820]: cali7565a49d9e4: Gained IPv6LL Sep 12 17:37:56.762372 systemd[1]: Started sshd@10-172.31.19.87:22-147.75.109.163:45358.service - OpenSSH per-connection server daemon (147.75.109.163:45358). Sep 12 17:37:57.009302 sshd[5909]: Accepted publickey for core from 147.75.109.163 port 45358 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:37:57.015784 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:57.032594 systemd-logind[1960]: New session 11 of user core. Sep 12 17:37:57.037037 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:37:58.007440 sshd[5909]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:58.018608 systemd[1]: sshd@10-172.31.19.87:22-147.75.109.163:45358.service: Deactivated successfully. Sep 12 17:37:58.026073 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:37:58.031491 systemd-logind[1960]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:37:58.035504 systemd-logind[1960]: Removed session 11. 
Sep 12 17:37:58.536565 ntpd[1955]: Listen normally on 8 vxlan.calico 192.168.51.192:123 Sep 12 17:37:58.536653 ntpd[1955]: Listen normally on 9 vxlan.calico [fe80::6487:7ff:fe0d:957d%4]:123 Sep 12 17:37:58.536712 ntpd[1955]: Listen normally on 10 calidb57e8c2ce6 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 12 17:37:58.536756 ntpd[1955]: Listen normally on 11 cali0150c16cd66 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 17:37:58.536885 ntpd[1955]: Listen normally on 12 cali7a41b00f6e0 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 12 17:37:58.536938 ntpd[1955]: Listen normally on 13 califac71f4df51 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:37:58.536979 ntpd[1955]: Listen normally on 14 cali6fc6ae3c88a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:37:58.537016 ntpd[1955]: Listen normally on 15 calicbcc949d133 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:37:58.537405 ntpd[1955]: Listen normally on 16 cali13a99d43463 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:37:58.537447 ntpd[1955]: Listen normally on 17 cali7565a49d9e4 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:37:58.862708 containerd[1989]: time="2025-09-12T17:37:58.862474638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:58.867728 containerd[1989]: time="2025-09-12T17:37:58.867633681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:37:58.875487 containerd[1989]: time="2025-09-12T17:37:58.875429025Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:58.881750 containerd[1989]: time="2025-09-12T17:37:58.881527134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:58.882767 containerd[1989]: time="2025-09-12T17:37:58.882612716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.466081123s" Sep 12 17:37:58.882767 containerd[1989]: time="2025-09-12T17:37:58.882647923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:37:58.885247 containerd[1989]: time="2025-09-12T17:37:58.883709517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:37:58.889014 containerd[1989]: time="2025-09-12T17:37:58.888958513Z" level=info msg="CreateContainer within sandbox \"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:37:58.942055 containerd[1989]: time="2025-09-12T17:37:58.942001524Z" level=info msg="CreateContainer within sandbox \"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7ef12b601420765126a4d94a467f84aefb6b09b50b6e11f016ebec5073feac18\"" Sep 12 17:37:58.947821 containerd[1989]: time="2025-09-12T17:37:58.946953598Z" level=info msg="StartContainer for \"7ef12b601420765126a4d94a467f84aefb6b09b50b6e11f016ebec5073feac18\"" Sep 12 17:37:59.042985 systemd[1]: Started cri-containerd-7ef12b601420765126a4d94a467f84aefb6b09b50b6e11f016ebec5073feac18.scope - libcontainer container 7ef12b601420765126a4d94a467f84aefb6b09b50b6e11f016ebec5073feac18. Sep 12 17:37:59.098686 containerd[1989]: time="2025-09-12T17:37:59.098515704Z" level=info msg="StartContainer for \"7ef12b601420765126a4d94a467f84aefb6b09b50b6e11f016ebec5073feac18\" returns successfully" Sep 12 17:37:59.847347 kubelet[3280]: I0912 17:37:59.847155 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-9897556d5-q2wbn" podStartSLOduration=6.74438682 podStartE2EDuration="13.831288474s" podCreationTimestamp="2025-09-12 17:37:46 +0000 UTC" firstStartedPulling="2025-09-12 17:37:48.327558122 +0000 UTC m=+42.623351738" lastFinishedPulling="2025-09-12 17:37:55.414459776 +0000 UTC m=+49.710253392" observedRunningTime="2025-09-12 17:37:55.75339236 +0000 UTC m=+50.049185981" watchObservedRunningTime="2025-09-12 17:37:59.831288474 +0000 UTC m=+54.127082096" Sep 12 17:37:59.848450 kubelet[3280]: I0912 17:37:59.848236 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c76cc97b8-g6gkc" podStartSLOduration=29.076315363 podStartE2EDuration="37.848217163s" podCreationTimestamp="2025-09-12 17:37:22 +0000 UTC" firstStartedPulling="2025-09-12 17:37:50.111644848 +0000 UTC m=+44.407438463" lastFinishedPulling="2025-09-12 17:37:58.883546609 +0000 UTC m=+53.179340263" observedRunningTime="2025-09-12 17:37:59.847425206 +0000 UTC m=+54.143218811" watchObservedRunningTime="2025-09-12 17:37:59.848217163 +0000 UTC m=+54.144010781" Sep 12 17:38:03.107597 systemd[1]: Started sshd@11-172.31.19.87:22-147.75.109.163:37976.service - OpenSSH per-connection server daemon (147.75.109.163:37976). Sep 12 17:38:03.168559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990800941.mount: Deactivated successfully. 
Sep 12 17:38:03.760764 sshd[5987]: Accepted publickey for core from 147.75.109.163 port 37976 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:38:03.772232 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:03.791653 systemd-logind[1960]: New session 12 of user core. Sep 12 17:38:03.798179 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:38:04.899279 containerd[1989]: time="2025-09-12T17:38:04.899222672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:04.902927 containerd[1989]: time="2025-09-12T17:38:04.902856082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:38:04.960176 containerd[1989]: time="2025-09-12T17:38:04.959560894Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:04.989968 containerd[1989]: time="2025-09-12T17:38:04.989911512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:04.991639 containerd[1989]: time="2025-09-12T17:38:04.991449833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.107711821s" Sep 12 17:38:04.991639 containerd[1989]: time="2025-09-12T17:38:04.991506064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:38:05.013110 sshd[5987]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:05.024951 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:38:05.026277 systemd[1]: sshd@11-172.31.19.87:22-147.75.109.163:37976.service: Deactivated successfully. Sep 12 17:38:05.029915 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:38:05.042102 systemd-logind[1960]: Removed session 12. Sep 12 17:38:05.048470 systemd[1]: Started sshd@12-172.31.19.87:22-147.75.109.163:37992.service - OpenSSH per-connection server daemon (147.75.109.163:37992). Sep 12 17:38:05.189956 containerd[1989]: time="2025-09-12T17:38:05.189424771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:38:05.264437 sshd[6014]: Accepted publickey for core from 147.75.109.163 port 37992 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:38:05.267977 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:05.276396 systemd-logind[1960]: New session 13 of user core. Sep 12 17:38:05.280139 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 12 17:38:05.376840 containerd[1989]: time="2025-09-12T17:38:05.376764252Z" level=info msg="CreateContainer within sandbox \"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:38:05.494465 containerd[1989]: time="2025-09-12T17:38:05.494341209Z" level=info msg="CreateContainer within sandbox \"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e\"" Sep 12 17:38:05.528702 containerd[1989]: time="2025-09-12T17:38:05.528575961Z" level=info msg="StartContainer for \"624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e\"" Sep 12 17:38:05.697717 sshd[6014]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:05.707228 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:38:05.710502 systemd[1]: sshd@12-172.31.19.87:22-147.75.109.163:37992.service: Deactivated successfully. Sep 12 17:38:05.716821 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:38:05.746490 systemd-logind[1960]: Removed session 13. Sep 12 17:38:05.757915 systemd[1]: Started cri-containerd-624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e.scope - libcontainer container 624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e. Sep 12 17:38:05.768188 systemd[1]: Started sshd@13-172.31.19.87:22-147.75.109.163:38000.service - OpenSSH per-connection server daemon (147.75.109.163:38000). Sep 12 17:38:05.900503 containerd[1989]: time="2025-09-12T17:38:05.899413674Z" level=info msg="StartContainer for \"624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e\" returns successfully" Sep 12 17:38:06.013478 sshd[6042]: Accepted publickey for core from 147.75.109.163 port 38000 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk Sep 12 17:38:06.019985 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:06.027483 systemd-logind[1960]: New session 14 of user core. Sep 12 17:38:06.034142 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:38:06.589237 sshd[6042]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:06.609649 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:38:06.611827 systemd[1]: sshd@13-172.31.19.87:22-147.75.109.163:38000.service: Deactivated successfully. Sep 12 17:38:06.616439 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:38:06.618206 systemd-logind[1960]: Removed session 14. 
Sep 12 17:38:06.724110 containerd[1989]: time="2025-09-12T17:38:06.723255613Z" level=info msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\"" Sep 12 17:38:07.097810 kubelet[3280]: I0912 17:38:07.055294 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5kcf7" podStartSLOduration=26.720627726 podStartE2EDuration="41.017880134s" podCreationTimestamp="2025-09-12 17:37:26 +0000 UTC" firstStartedPulling="2025-09-12 17:37:50.854701195 +0000 UTC m=+45.150494810" lastFinishedPulling="2025-09-12 17:38:05.151953595 +0000 UTC m=+59.447747218" observedRunningTime="2025-09-12 17:38:06.994894998 +0000 UTC m=+61.290688650" watchObservedRunningTime="2025-09-12 17:38:07.017880134 +0000 UTC m=+61.313673759" Sep 12 17:38:07.805762 systemd[1]: run-containerd-runc-k8s.io-624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e-runc.YvbHcr.mount: Deactivated successfully. Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.380 [WARNING][6102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f13ccb5e-3497-40e3-9a44-1d13e18105b6", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190", Pod:"goldmane-54d579b49d-5kcf7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac71f4df51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.385 [INFO][6102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.385 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" iface="eth0" netns="" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.385 [INFO][6102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.385 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.792 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.802 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.807 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.844 [WARNING][6118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.845 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.848 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:07.854077 containerd[1989]: 2025-09-12 17:38:07.851 [INFO][6102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:07.855464 containerd[1989]: time="2025-09-12T17:38:07.855321721Z" level=info msg="TearDown network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" successfully" Sep 12 17:38:07.855464 containerd[1989]: time="2025-09-12T17:38:07.855359570Z" level=info msg="StopPodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" returns successfully" Sep 12 17:38:07.959805 containerd[1989]: time="2025-09-12T17:38:07.959635007Z" level=info msg="RemovePodSandbox for \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\"" Sep 12 17:38:07.964295 containerd[1989]: time="2025-09-12T17:38:07.964237165Z" level=info msg="Forcibly stopping sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\"" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.026 [WARNING][6151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f13ccb5e-3497-40e3-9a44-1d13e18105b6", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"54648165dd460e86ca63299619a5e50a9815139ea9905ac38be67509a0e87190", Pod:"goldmane-54d579b49d-5kcf7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califac71f4df51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.026 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.026 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" iface="eth0" netns="" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.026 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.026 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.068 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.068 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.068 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.075 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.075 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" HandleID="k8s-pod-network.b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Workload="ip--172--31--19--87-k8s-goldmane--54d579b49d--5kcf7-eth0" Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.078 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:08.086722 containerd[1989]: 2025-09-12 17:38:08.082 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc" Sep 12 17:38:08.086722 containerd[1989]: time="2025-09-12T17:38:08.085944003Z" level=info msg="TearDown network for sandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" successfully" Sep 12 17:38:08.102147 containerd[1989]: time="2025-09-12T17:38:08.102090062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:08.113049 containerd[1989]: time="2025-09-12T17:38:08.113001015Z" level=info msg="RemovePodSandbox \"b7dc9f59aed152e0ecfe14da598a74184020fe09e104a793e672a65199a264bc\" returns successfully" Sep 12 17:38:08.125149 containerd[1989]: time="2025-09-12T17:38:08.125110236Z" level=info msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.173 [WARNING][6176] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0", GenerateName:"calico-kube-controllers-5f68b6db8c-", Namespace:"calico-system", SelfLink:"", UID:"bbd39c69-a865-4ebd-9b70-4fae310ae712", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68b6db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20", Pod:"calico-kube-controllers-5f68b6db8c-kwzx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7565a49d9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.173 [INFO][6176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.173 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" iface="eth0" netns="" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.173 [INFO][6176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.173 [INFO][6176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.212 [INFO][6184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.213 [INFO][6184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.213 [INFO][6184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.229 [WARNING][6184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.230 [INFO][6184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.234 [INFO][6184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:08.240779 containerd[1989]: 2025-09-12 17:38:08.237 [INFO][6176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.248149 containerd[1989]: time="2025-09-12T17:38:08.240854356Z" level=info msg="TearDown network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" successfully" Sep 12 17:38:08.248149 containerd[1989]: time="2025-09-12T17:38:08.240917794Z" level=info msg="StopPodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" returns successfully" Sep 12 17:38:08.249811 containerd[1989]: time="2025-09-12T17:38:08.249490866Z" level=info msg="RemovePodSandbox for \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" Sep 12 17:38:08.249811 containerd[1989]: time="2025-09-12T17:38:08.249667105Z" level=info msg="Forcibly stopping sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\"" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.356 [WARNING][6208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0", GenerateName:"calico-kube-controllers-5f68b6db8c-", Namespace:"calico-system", SelfLink:"", UID:"bbd39c69-a865-4ebd-9b70-4fae310ae712", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68b6db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20", Pod:"calico-kube-controllers-5f68b6db8c-kwzx4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7565a49d9e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.357 [INFO][6208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.357 [INFO][6208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" iface="eth0" netns="" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.357 [INFO][6208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.357 [INFO][6208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.418 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.421 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.421 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.434 [WARNING][6215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.434 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" HandleID="k8s-pod-network.5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Workload="ip--172--31--19--87-k8s-calico--kube--controllers--5f68b6db8c--kwzx4-eth0" Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.439 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:08.458878 containerd[1989]: 2025-09-12 17:38:08.448 [INFO][6208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7" Sep 12 17:38:08.463328 containerd[1989]: time="2025-09-12T17:38:08.459977533Z" level=info msg="TearDown network for sandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" successfully" Sep 12 17:38:08.482765 containerd[1989]: time="2025-09-12T17:38:08.482716396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:08.482916 containerd[1989]: time="2025-09-12T17:38:08.482831509Z" level=info msg="RemovePodSandbox \"5cc60936d7ba81c6b66a8844a4932d5f97cf6b5adef2bd95bef8f18d612f25c7\" returns successfully" Sep 12 17:38:08.485694 containerd[1989]: time="2025-09-12T17:38:08.485654728Z" level=info msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\"" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.566 [WARNING][6229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58", Pod:"coredns-668d6bf9bc-cbsml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbcc949d133", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.566 [INFO][6229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.567 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" iface="eth0" netns="" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.567 [INFO][6229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.567 [INFO][6229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.606 [INFO][6236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.607 [INFO][6236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.607 [INFO][6236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.619 [WARNING][6236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.619 [INFO][6236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.624 [INFO][6236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:08.633933 containerd[1989]: 2025-09-12 17:38:08.629 [INFO][6229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.637126 containerd[1989]: time="2025-09-12T17:38:08.633983172Z" level=info msg="TearDown network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" successfully" Sep 12 17:38:08.637126 containerd[1989]: time="2025-09-12T17:38:08.634014891Z" level=info msg="StopPodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" returns successfully" Sep 12 17:38:08.642528 containerd[1989]: time="2025-09-12T17:38:08.642133461Z" level=info msg="RemovePodSandbox for \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\"" Sep 12 17:38:08.642528 containerd[1989]: time="2025-09-12T17:38:08.642180317Z" level=info msg="Forcibly stopping sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\"" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.708 [WARNING][6250] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e120bb6-e43d-4b4a-912f-d1447d2d9f1e", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"945d6e12a342aeda3e7726b93e893a44cf77e40bc14edb11510eb22cf8385c58", Pod:"coredns-668d6bf9bc-cbsml", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbcc949d133", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.709 [INFO][6250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.709 [INFO][6250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" iface="eth0" netns="" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.709 [INFO][6250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.709 [INFO][6250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.756 [INFO][6258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.756 [INFO][6258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.756 [INFO][6258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.773 [WARNING][6258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.774 [INFO][6258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" HandleID="k8s-pod-network.61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--cbsml-eth0" Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.778 [INFO][6258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:08.792281 containerd[1989]: 2025-09-12 17:38:08.784 [INFO][6250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a" Sep 12 17:38:08.793643 containerd[1989]: time="2025-09-12T17:38:08.792339528Z" level=info msg="TearDown network for sandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" successfully" Sep 12 17:38:08.802406 containerd[1989]: time="2025-09-12T17:38:08.801857446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:08.802406 containerd[1989]: time="2025-09-12T17:38:08.801938575Z" level=info msg="RemovePodSandbox \"61c328707d77fff45a92ab8c6a82174551f68874e7bed39ee66c5912ec7ac54a\" returns successfully" Sep 12 17:38:08.804489 containerd[1989]: time="2025-09-12T17:38:08.803972512Z" level=info msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" Sep 12 17:38:08.818253 containerd[1989]: time="2025-09-12T17:38:08.818207447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:08.819808 containerd[1989]: time="2025-09-12T17:38:08.819748686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:38:08.822368 containerd[1989]: time="2025-09-12T17:38:08.822329802Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:08.827745 containerd[1989]: time="2025-09-12T17:38:08.827710812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:08.831581 containerd[1989]: time="2025-09-12T17:38:08.828506625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.639033369s"
Sep 12 17:38:08.831581 containerd[1989]: time="2025-09-12T17:38:08.831573187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:38:08.846673 containerd[1989]: time="2025-09-12T17:38:08.846625994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:38:08.857473 containerd[1989]: time="2025-09-12T17:38:08.857411355Z" level=info msg="CreateContainer within sandbox \"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:38:08.966268 containerd[1989]: time="2025-09-12T17:38:08.960519036Z" level=info msg="CreateContainer within sandbox \"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4d94780e297e74cd97c47d799c50e040355fad647b0d9f606c2635308cd7df58\"" Sep 12 17:38:08.966268 containerd[1989]: time="2025-09-12T17:38:08.963987589Z" level=info msg="StartContainer for \"4d94780e297e74cd97c47d799c50e040355fad647b0d9f606c2635308cd7df58\"" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:08.938 [WARNING][6272] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:08.938 [INFO][6272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:08.938 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" iface="eth0" netns="" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:08.938 [INFO][6272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:08.938 [INFO][6272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.038 [INFO][6298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.039 [INFO][6298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.039 [INFO][6298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.048 [WARNING][6298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.048 [INFO][6298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.053 [INFO][6298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:09.073187 containerd[1989]: 2025-09-12 17:38:09.060 [INFO][6272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.078234 containerd[1989]: time="2025-09-12T17:38:09.073702297Z" level=info msg="TearDown network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" successfully" Sep 12 17:38:09.078234 containerd[1989]: time="2025-09-12T17:38:09.073737229Z" level=info msg="StopPodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" returns successfully" Sep 12 17:38:09.078234 containerd[1989]: time="2025-09-12T17:38:09.075089358Z" level=info msg="RemovePodSandbox for \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" Sep 12 17:38:09.078234 containerd[1989]: time="2025-09-12T17:38:09.075125216Z" level=info msg="Forcibly stopping sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\"" Sep 12 17:38:09.124449 systemd[1]: Started cri-containerd-4d94780e297e74cd97c47d799c50e040355fad647b0d9f606c2635308cd7df58.scope - libcontainer container 4d94780e297e74cd97c47d799c50e040355fad647b0d9f606c2635308cd7df58. 
Sep 12 17:38:09.217812 containerd[1989]: time="2025-09-12T17:38:09.217050832Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:09.230069 containerd[1989]: time="2025-09-12T17:38:09.229726903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:38:09.232311 containerd[1989]: time="2025-09-12T17:38:09.232118788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 385.453383ms" Sep 12 17:38:09.232311 containerd[1989]: time="2025-09-12T17:38:09.232173129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:38:09.235121 containerd[1989]: time="2025-09-12T17:38:09.234647757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:38:09.239165 containerd[1989]: time="2025-09-12T17:38:09.238386490Z" level=info msg="CreateContainer within sandbox \"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:38:09.295444 containerd[1989]: time="2025-09-12T17:38:09.295396594Z" level=info msg="CreateContainer within sandbox \"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c1cf3fe0f83ad968013e26ef3a4003c21b9b439f227b9540e372ede01430aa3a\"" Sep 12 17:38:09.300315 containerd[1989]: time="2025-09-12T17:38:09.300277498Z" level=info msg="StartContainer for \"c1cf3fe0f83ad968013e26ef3a4003c21b9b439f227b9540e372ede01430aa3a\"" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.216 [WARNING][6332] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" WorkloadEndpoint="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.217 [INFO][6332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.217 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" iface="eth0" netns="" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.217 [INFO][6332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.217 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.317 [INFO][6349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.318 [INFO][6349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.318 [INFO][6349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.347 [WARNING][6349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.347 [INFO][6349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" HandleID="k8s-pod-network.d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Workload="ip--172--31--19--87-k8s-whisker--779877b74--ns76l-eth0" Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.391 [INFO][6349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:09.413933 containerd[1989]: 2025-09-12 17:38:09.400 [INFO][6332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf" Sep 12 17:38:09.413933 containerd[1989]: time="2025-09-12T17:38:09.410005342Z" level=info msg="TearDown network for sandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" successfully" Sep 12 17:38:09.428039 containerd[1989]: time="2025-09-12T17:38:09.427954269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:09.428268 containerd[1989]: time="2025-09-12T17:38:09.428243750Z" level=info msg="RemovePodSandbox \"d9a562367f43b9e1d54bb432fcee06935db33ff263d3b1629afea456c51153cf\" returns successfully" Sep 12 17:38:09.428960 containerd[1989]: time="2025-09-12T17:38:09.428929753Z" level=info msg="StopPodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\"" Sep 12 17:38:09.457057 systemd[1]: Started cri-containerd-c1cf3fe0f83ad968013e26ef3a4003c21b9b439f227b9540e372ede01430aa3a.scope - libcontainer container c1cf3fe0f83ad968013e26ef3a4003c21b9b439f227b9540e372ede01430aa3a. 
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.536 [WARNING][6383] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47dd082d-8313-4b06-a25a-46c1ffeb1afd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8", Pod:"csi-node-driver-hnq78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0150c16cd66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.536 [INFO][6383] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.536 [INFO][6383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" iface="eth0" netns=""
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.536 [INFO][6383] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.536 [INFO][6383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.601 [INFO][6390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.601 [INFO][6390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.601 [INFO][6390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.609 [WARNING][6390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.609 [INFO][6390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.613 [INFO][6390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:09.633027 containerd[1989]: 2025-09-12 17:38:09.623 [INFO][6383] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.633901 containerd[1989]: time="2025-09-12T17:38:09.633064744Z" level=info msg="TearDown network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" successfully"
Sep 12 17:38:09.633901 containerd[1989]: time="2025-09-12T17:38:09.633094588Z" level=info msg="StopPodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" returns successfully"
Sep 12 17:38:09.639781 containerd[1989]: time="2025-09-12T17:38:09.639734957Z" level=info msg="RemovePodSandbox for \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\""
Sep 12 17:38:09.640250 containerd[1989]: time="2025-09-12T17:38:09.639778458Z" level=info msg="Forcibly stopping sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\""
Sep 12 17:38:09.753065 containerd[1989]: time="2025-09-12T17:38:09.751297312Z" level=info msg="StartContainer for \"4d94780e297e74cd97c47d799c50e040355fad647b0d9f606c2635308cd7df58\" returns successfully"
Sep 12 17:38:09.912523 containerd[1989]: time="2025-09-12T17:38:09.912434051Z" level=info msg="StartContainer for \"c1cf3fe0f83ad968013e26ef3a4003c21b9b439f227b9540e372ede01430aa3a\" returns successfully"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.835 [WARNING][6413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47dd082d-8313-4b06-a25a-46c1ffeb1afd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"a846bd659a31e56febe23e424194a8e33b7b5e88164f0031112cdc7133b9bfd8", Pod:"csi-node-driver-hnq78", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0150c16cd66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.836 [INFO][6413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.836 [INFO][6413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" iface="eth0" netns=""
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.836 [INFO][6413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.836 [INFO][6413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.947 [INFO][6430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.950 [INFO][6430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.950 [INFO][6430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.964 [WARNING][6430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.964 [INFO][6430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" HandleID="k8s-pod-network.d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859" Workload="ip--172--31--19--87-k8s-csi--node--driver--hnq78-eth0"
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.971 [INFO][6430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:09.984295 containerd[1989]: 2025-09-12 17:38:09.975 [INFO][6413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859"
Sep 12 17:38:09.987728 containerd[1989]: time="2025-09-12T17:38:09.984341006Z" level=info msg="TearDown network for sandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" successfully"
Sep 12 17:38:10.000940 containerd[1989]: time="2025-09-12T17:38:10.000890828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:38:10.002718 containerd[1989]: time="2025-09-12T17:38:10.001309190Z" level=info msg="RemovePodSandbox \"d947f36ba4605ac76c45fa92f3ea119f7ed17ded7fc5d26db5c4015e63c1b859\" returns successfully"
Sep 12 17:38:10.025265 containerd[1989]: time="2025-09-12T17:38:10.024806779Z" level=info msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\""
Sep 12 17:38:10.196206 kubelet[3280]: I0912 17:38:10.190999 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c76cc97b8-jfrtm" podStartSLOduration=33.617656499 podStartE2EDuration="48.190976887s" podCreationTimestamp="2025-09-12 17:37:22 +0000 UTC" firstStartedPulling="2025-09-12 17:37:54.661067184 +0000 UTC m=+48.956860799" lastFinishedPulling="2025-09-12 17:38:09.234387566 +0000 UTC m=+63.530181187" observedRunningTime="2025-09-12 17:38:10.190293842 +0000 UTC m=+64.486087467" watchObservedRunningTime="2025-09-12 17:38:10.190976887 +0000 UTC m=+64.486770510"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.124 [WARNING][6457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49d7f36-b7fa-455a-b787-1fea47393279", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844", Pod:"calico-apiserver-7c76cc97b8-g6gkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a41b00f6e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.124 [INFO][6457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.124 [INFO][6457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" iface="eth0" netns=""
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.124 [INFO][6457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.125 [INFO][6457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.210 [INFO][6464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.212 [INFO][6464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.213 [INFO][6464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.262 [WARNING][6464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.262 [INFO][6464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.268 [INFO][6464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:10.280570 containerd[1989]: 2025-09-12 17:38:10.276 [INFO][6457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.280570 containerd[1989]: time="2025-09-12T17:38:10.280472102Z" level=info msg="TearDown network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" successfully"
Sep 12 17:38:10.280570 containerd[1989]: time="2025-09-12T17:38:10.280503638Z" level=info msg="StopPodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" returns successfully"
Sep 12 17:38:10.283564 containerd[1989]: time="2025-09-12T17:38:10.282982590Z" level=info msg="RemovePodSandbox for \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\""
Sep 12 17:38:10.283564 containerd[1989]: time="2025-09-12T17:38:10.283129151Z" level=info msg="Forcibly stopping sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\""
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.358 [WARNING][6484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e49d7f36-b7fa-455a-b787-1fea47393279", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"63fdde1736e0e207ec9de300b56faac48a9c211efbc34ae47ab06d232ba2d844", Pod:"calico-apiserver-7c76cc97b8-g6gkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a41b00f6e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.359 [INFO][6484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.359 [INFO][6484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" iface="eth0" netns=""
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.359 [INFO][6484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.359 [INFO][6484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.395 [INFO][6493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.396 [INFO][6493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.396 [INFO][6493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.405 [WARNING][6493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.405 [INFO][6493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" HandleID="k8s-pod-network.98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--g6gkc-eth0"
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.408 [INFO][6493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:10.413842 containerd[1989]: 2025-09-12 17:38:10.410 [INFO][6484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4"
Sep 12 17:38:10.415964 containerd[1989]: time="2025-09-12T17:38:10.413885512Z" level=info msg="TearDown network for sandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" successfully"
Sep 12 17:38:10.422993 containerd[1989]: time="2025-09-12T17:38:10.422941296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:38:10.423141 containerd[1989]: time="2025-09-12T17:38:10.423035401Z" level=info msg="RemovePodSandbox \"98952a2bdd77f0c9ae213be788367280bb1b51ee0eea6069307d6ababf1163d4\" returns successfully"
Sep 12 17:38:10.423597 containerd[1989]: time="2025-09-12T17:38:10.423567628Z" level=info msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\""
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.495 [WARNING][6508] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8306a462-dd92-4fcd-bcfb-5dc368adabea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072", Pod:"coredns-668d6bf9bc-dr46r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc6ae3c88a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.496 [INFO][6508] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.496 [INFO][6508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" iface="eth0" netns=""
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.496 [INFO][6508] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.496 [INFO][6508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.537 [INFO][6515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.538 [INFO][6515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.538 [INFO][6515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.546 [WARNING][6515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.546 [INFO][6515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.548 [INFO][6515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:10.553913 containerd[1989]: 2025-09-12 17:38:10.551 [INFO][6508] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.554998 containerd[1989]: time="2025-09-12T17:38:10.554953771Z" level=info msg="TearDown network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" successfully"
Sep 12 17:38:10.555075 containerd[1989]: time="2025-09-12T17:38:10.554998877Z" level=info msg="StopPodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" returns successfully"
Sep 12 17:38:10.556030 containerd[1989]: time="2025-09-12T17:38:10.555996383Z" level=info msg="RemovePodSandbox for \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\""
Sep 12 17:38:10.556128 containerd[1989]: time="2025-09-12T17:38:10.556040650Z" level=info msg="Forcibly stopping sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\""
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.641 [WARNING][6529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8306a462-dd92-4fcd-bcfb-5dc368adabea", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"98c8413cbcd332640f9e7756cf2fd020f498131ffe1a1783c15798c76da27072", Pod:"coredns-668d6bf9bc-dr46r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc6ae3c88a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.641 [INFO][6529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.641 [INFO][6529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" iface="eth0" netns=""
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.641 [INFO][6529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.641 [INFO][6529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.823 [INFO][6542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.823 [INFO][6542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.823 [INFO][6542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.853 [WARNING][6542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.853 [INFO][6542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" HandleID="k8s-pod-network.c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34" Workload="ip--172--31--19--87-k8s-coredns--668d6bf9bc--dr46r-eth0"
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.866 [INFO][6542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:10.886928 containerd[1989]: 2025-09-12 17:38:10.875 [INFO][6529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34"
Sep 12 17:38:10.886928 containerd[1989]: time="2025-09-12T17:38:10.885497654Z" level=info msg="TearDown network for sandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" successfully"
Sep 12 17:38:10.907204 containerd[1989]: time="2025-09-12T17:38:10.907155582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:38:10.907657 containerd[1989]: time="2025-09-12T17:38:10.907534349Z" level=info msg="RemovePodSandbox \"c0589ac450ab592e46a27067a58e5650ce85234fbe6257ab7e92d96ac0e4ce34\" returns successfully"
Sep 12 17:38:10.912211 containerd[1989]: time="2025-09-12T17:38:10.911291964Z" level=info msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\""
Sep 12 17:38:11.056773 kubelet[3280]: I0912 17:38:11.049247 3280 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:38:11.100581 kubelet[3280]: I0912 17:38:11.100542 3280 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.100 [WARNING][6566] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d1a0c17-6003-426a-b7b3-9d6d213504ae", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96", Pod:"calico-apiserver-7c76cc97b8-jfrtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13a99d43463", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.102 [INFO][6566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.102 [INFO][6566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" iface="eth0" netns=""
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.102 [INFO][6566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.102 [INFO][6566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.228 [INFO][6573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.231 [INFO][6573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.231 [INFO][6573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.250 [WARNING][6573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.251 [INFO][6573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.259 [INFO][6573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:11.269894 containerd[1989]: 2025-09-12 17:38:11.264 [INFO][6566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.269894 containerd[1989]: time="2025-09-12T17:38:11.269845100Z" level=info msg="TearDown network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" successfully"
Sep 12 17:38:11.269894 containerd[1989]: time="2025-09-12T17:38:11.269875359Z" level=info msg="StopPodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" returns successfully"
Sep 12 17:38:11.273452 containerd[1989]: time="2025-09-12T17:38:11.273065738Z" level=info msg="RemovePodSandbox for \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\""
Sep 12 17:38:11.273452 containerd[1989]: time="2025-09-12T17:38:11.273111473Z" level=info msg="Forcibly stopping sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\""
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.399 [WARNING][6592] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0", GenerateName:"calico-apiserver-7c76cc97b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d1a0c17-6003-426a-b7b3-9d6d213504ae", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c76cc97b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-87", ContainerID:"38eb1141f6c17b639eaca23f31f2139dcf3c9ff82af61123f97042941f257a96", Pod:"calico-apiserver-7c76cc97b8-jfrtm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13a99d43463", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.400 [INFO][6592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.400 [INFO][6592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" iface="eth0" netns=""
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.400 [INFO][6592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.401 [INFO][6592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.472 [INFO][6599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.472 [INFO][6599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.472 [INFO][6599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.484 [WARNING][6599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.484 [INFO][6599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" HandleID="k8s-pod-network.c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4" Workload="ip--172--31--19--87-k8s-calico--apiserver--7c76cc97b8--jfrtm-eth0"
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.496 [INFO][6599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:11.503880 containerd[1989]: 2025-09-12 17:38:11.500 [INFO][6592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4"
Sep 12 17:38:11.505203 containerd[1989]: time="2025-09-12T17:38:11.503999268Z" level=info msg="TearDown network for sandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" successfully"
Sep 12 17:38:11.512414 containerd[1989]: time="2025-09-12T17:38:11.512367355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:38:11.517970 containerd[1989]: time="2025-09-12T17:38:11.517855393Z" level=info msg="RemovePodSandbox \"c9060b23ddf8375e932cd006b6cae8f44704f7c4c4d269e555f917fcacff4de4\" returns successfully"
Sep 12 17:38:11.622371 systemd[1]: Started sshd@14-172.31.19.87:22-147.75.109.163:35534.service - OpenSSH per-connection server daemon (147.75.109.163:35534).
Sep 12 17:38:11.910997 sshd[6606]: Accepted publickey for core from 147.75.109.163 port 35534 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:11.914325 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:11.923760 systemd-logind[1960]: New session 15 of user core.
Sep 12 17:38:11.931062 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:38:12.401990 kubelet[3280]: I0912 17:38:12.401783 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hnq78" podStartSLOduration=27.531404168999998 podStartE2EDuration="46.401759786s" podCreationTimestamp="2025-09-12 17:37:26 +0000 UTC" firstStartedPulling="2025-09-12 17:37:49.975701278 +0000 UTC m=+44.271494888" lastFinishedPulling="2025-09-12 17:38:08.846056885 +0000 UTC m=+63.141850505" observedRunningTime="2025-09-12 17:38:10.234576809 +0000 UTC m=+64.530370433" watchObservedRunningTime="2025-09-12 17:38:12.401759786 +0000 UTC m=+66.697553410"
Sep 12 17:38:13.923154 sshd[6606]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:13.931060 systemd[1]: sshd@14-172.31.19.87:22-147.75.109.163:35534.service: Deactivated successfully.
Sep 12 17:38:13.934812 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:38:13.937651 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:38:13.945301 systemd-logind[1960]: Removed session 15.
Sep 12 17:38:15.827684 containerd[1989]: time="2025-09-12T17:38:15.827629323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:15.963449 containerd[1989]: time="2025-09-12T17:38:15.843857059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 17:38:15.993625 containerd[1989]: time="2025-09-12T17:38:15.993574694Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:15.997487 containerd[1989]: time="2025-09-12T17:38:15.997416980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:38:15.998768 containerd[1989]: time="2025-09-12T17:38:15.998353183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.763665691s"
Sep 12 17:38:15.998768 containerd[1989]: time="2025-09-12T17:38:15.998396836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 17:38:16.268686 containerd[1989]: time="2025-09-12T17:38:16.267285455Z" level=info msg="CreateContainer within sandbox \"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:38:16.463823 containerd[1989]: time="2025-09-12T17:38:16.463746355Z" level=info msg="CreateContainer within sandbox \"974d59012005103dc2f205bf6f75cb860bf30c75571880570de11ef88d54fc20\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28\""
Sep 12 17:38:16.464962 containerd[1989]: time="2025-09-12T17:38:16.464295599Z" level=info msg="StartContainer for \"d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28\""
Sep 12 17:38:16.842125 systemd[1]: Started cri-containerd-d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28.scope - libcontainer container d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28.
Sep 12 17:38:16.918101 containerd[1989]: time="2025-09-12T17:38:16.918062563Z" level=info msg="StartContainer for \"d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28\" returns successfully"
Sep 12 17:38:17.672209 systemd[1]: run-containerd-runc-k8s.io-cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d-runc.bFey3Z.mount: Deactivated successfully.
Sep 12 17:38:17.756246 kubelet[3280]: I0912 17:38:17.747377 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f68b6db8c-kwzx4" podStartSLOduration=30.331874072 podStartE2EDuration="51.711404914s" podCreationTimestamp="2025-09-12 17:37:26 +0000 UTC" firstStartedPulling="2025-09-12 17:37:54.672236622 +0000 UTC m=+48.968030236" lastFinishedPulling="2025-09-12 17:38:16.051767474 +0000 UTC m=+70.347561078" observedRunningTime="2025-09-12 17:38:17.683273123 +0000 UTC m=+71.979066753" watchObservedRunningTime="2025-09-12 17:38:17.711404914 +0000 UTC m=+72.007198561"
Sep 12 17:38:18.971161 systemd[1]: Started sshd@15-172.31.19.87:22-147.75.109.163:35542.service - OpenSSH per-connection server daemon (147.75.109.163:35542).
Sep 12 17:38:19.212679 sshd[6751]: Accepted publickey for core from 147.75.109.163 port 35542 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:19.215990 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:19.224019 systemd-logind[1960]: New session 16 of user core.
Sep 12 17:38:19.226304 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:38:20.310522 sshd[6751]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:20.318526 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:38:20.318765 systemd[1]: sshd@15-172.31.19.87:22-147.75.109.163:35542.service: Deactivated successfully.
Sep 12 17:38:20.321641 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:38:20.323514 systemd-logind[1960]: Removed session 16.
Sep 12 17:38:25.343821 systemd[1]: Started sshd@16-172.31.19.87:22-147.75.109.163:50866.service - OpenSSH per-connection server daemon (147.75.109.163:50866).
Sep 12 17:38:25.621967 sshd[6768]: Accepted publickey for core from 147.75.109.163 port 50866 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:25.623867 sshd[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:25.636672 systemd-logind[1960]: New session 17 of user core.
Sep 12 17:38:25.640005 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:38:26.588688 sshd[6768]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:26.594295 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:38:26.594478 systemd[1]: sshd@16-172.31.19.87:22-147.75.109.163:50866.service: Deactivated successfully.
Sep 12 17:38:26.597528 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:38:26.600107 systemd-logind[1960]: Removed session 17.
Sep 12 17:38:26.628209 systemd[1]: Started sshd@17-172.31.19.87:22-147.75.109.163:50868.service - OpenSSH per-connection server daemon (147.75.109.163:50868).
Sep 12 17:38:26.825152 sshd[6782]: Accepted publickey for core from 147.75.109.163 port 50868 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:26.827110 sshd[6782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:26.833616 systemd-logind[1960]: New session 18 of user core.
Sep 12 17:38:26.840998 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:38:27.584625 sshd[6782]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:27.594388 systemd[1]: sshd@17-172.31.19.87:22-147.75.109.163:50868.service: Deactivated successfully.
Sep 12 17:38:27.598240 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:38:27.601172 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:38:27.620818 systemd[1]: Started sshd@18-172.31.19.87:22-147.75.109.163:50878.service - OpenSSH per-connection server daemon (147.75.109.163:50878).
Sep 12 17:38:27.624540 systemd-logind[1960]: Removed session 18.
Sep 12 17:38:27.868450 sshd[6793]: Accepted publickey for core from 147.75.109.163 port 50878 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:27.871161 sshd[6793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:27.881894 systemd-logind[1960]: New session 19 of user core.
Sep 12 17:38:27.892354 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:38:28.886882 sshd[6793]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:28.899106 systemd[1]: sshd@18-172.31.19.87:22-147.75.109.163:50878.service: Deactivated successfully.
Sep 12 17:38:28.903983 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:38:28.906706 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:38:28.932297 systemd[1]: Started sshd@19-172.31.19.87:22-147.75.109.163:50888.service - OpenSSH per-connection server daemon (147.75.109.163:50888).
Sep 12 17:38:28.936996 systemd-logind[1960]: Removed session 19.
Sep 12 17:38:29.151386 sshd[6817]: Accepted publickey for core from 147.75.109.163 port 50888 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:29.154231 sshd[6817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:29.167864 systemd-logind[1960]: New session 20 of user core.
Sep 12 17:38:29.174847 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:38:30.382079 sshd[6817]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:30.388488 systemd[1]: sshd@19-172.31.19.87:22-147.75.109.163:50888.service: Deactivated successfully.
Sep 12 17:38:30.395945 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:38:30.398997 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:38:30.415319 systemd-logind[1960]: Removed session 20.
Sep 12 17:38:30.425366 systemd[1]: Started sshd@20-172.31.19.87:22-147.75.109.163:34546.service - OpenSSH per-connection server daemon (147.75.109.163:34546).
Sep 12 17:38:30.648196 sshd[6831]: Accepted publickey for core from 147.75.109.163 port 34546 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:30.650019 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:30.656458 systemd-logind[1960]: New session 21 of user core.
Sep 12 17:38:30.660114 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:38:30.879737 sshd[6831]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:30.883548 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:38:30.884428 systemd[1]: sshd@20-172.31.19.87:22-147.75.109.163:34546.service: Deactivated successfully.
Sep 12 17:38:30.886749 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:38:30.888461 systemd-logind[1960]: Removed session 21.
Sep 12 17:38:35.913726 systemd[1]: Started sshd@21-172.31.19.87:22-147.75.109.163:34556.service - OpenSSH per-connection server daemon (147.75.109.163:34556).
Sep 12 17:38:36.131867 sshd[6848]: Accepted publickey for core from 147.75.109.163 port 34556 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:36.134904 sshd[6848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:36.141414 systemd-logind[1960]: New session 22 of user core.
Sep 12 17:38:36.147166 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:38:36.599956 sshd[6848]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:36.604619 systemd[1]: sshd@21-172.31.19.87:22-147.75.109.163:34556.service: Deactivated successfully.
Sep 12 17:38:36.606948 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:38:36.608131 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:38:36.609291 systemd-logind[1960]: Removed session 22.
Sep 12 17:38:39.166223 systemd[1]: run-containerd-runc-k8s.io-624ba09a4f33ad514dc66fe9be292c9724cc7b3f9c9e5281fc3c671041416f8e-runc.UojVbW.mount: Deactivated successfully.
Sep 12 17:38:41.639083 systemd[1]: Started sshd@22-172.31.19.87:22-147.75.109.163:52072.service - OpenSSH per-connection server daemon (147.75.109.163:52072).
Sep 12 17:38:41.894527 sshd[6883]: Accepted publickey for core from 147.75.109.163 port 52072 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:41.897643 sshd[6883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:41.905126 systemd-logind[1960]: New session 23 of user core.
Sep 12 17:38:41.909111 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:38:42.861012 sshd[6883]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:42.867370 systemd[1]: sshd@22-172.31.19.87:22-147.75.109.163:52072.service: Deactivated successfully.
Sep 12 17:38:42.871276 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:38:42.874968 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:38:42.877700 systemd-logind[1960]: Removed session 23.
Sep 12 17:38:47.640125 systemd[1]: run-containerd-runc-k8s.io-cae050fcbbc66ee7a73d8a16f6480ecc92e87e7c7d11114304ae9e2a64750f2d-runc.IgEqUJ.mount: Deactivated successfully.
Sep 12 17:38:47.901617 systemd[1]: Started sshd@23-172.31.19.87:22-147.75.109.163:52080.service - OpenSSH per-connection server daemon (147.75.109.163:52080).
Sep 12 17:38:48.200449 sshd[6959]: Accepted publickey for core from 147.75.109.163 port 52080 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:48.205918 sshd[6959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:48.219781 systemd-logind[1960]: New session 24 of user core.
Sep 12 17:38:48.226583 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:38:49.413137 sshd[6959]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:49.419659 systemd[1]: sshd@23-172.31.19.87:22-147.75.109.163:52080.service: Deactivated successfully.
Sep 12 17:38:49.423991 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:38:49.426047 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:38:49.430414 systemd-logind[1960]: Removed session 24.
Sep 12 17:38:54.452192 systemd[1]: Started sshd@24-172.31.19.87:22-147.75.109.163:38926.service - OpenSSH per-connection server daemon (147.75.109.163:38926).
Sep 12 17:38:54.650746 sshd[6974]: Accepted publickey for core from 147.75.109.163 port 38926 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:38:54.653076 sshd[6974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:54.659199 systemd-logind[1960]: New session 25 of user core.
Sep 12 17:38:54.663042 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:38:55.064172 sshd[6974]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:55.070631 systemd[1]: sshd@24-172.31.19.87:22-147.75.109.163:38926.service: Deactivated successfully.
Sep 12 17:38:55.076288 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:38:55.077662 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:38:55.079861 systemd-logind[1960]: Removed session 25.
Sep 12 17:39:00.100309 systemd[1]: Started sshd@25-172.31.19.87:22-147.75.109.163:37384.service - OpenSSH per-connection server daemon (147.75.109.163:37384).
Sep 12 17:39:00.314961 sshd[6988]: Accepted publickey for core from 147.75.109.163 port 37384 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:39:00.317056 sshd[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:00.326518 systemd-logind[1960]: New session 26 of user core.
Sep 12 17:39:00.333225 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:39:00.694622 sshd[6988]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:00.700419 systemd[1]: sshd@25-172.31.19.87:22-147.75.109.163:37384.service: Deactivated successfully.
Sep 12 17:39:00.704513 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:39:00.709650 systemd-logind[1960]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:39:00.713754 systemd-logind[1960]: Removed session 26.
Sep 12 17:39:05.739221 systemd[1]: Started sshd@26-172.31.19.87:22-147.75.109.163:37390.service - OpenSSH per-connection server daemon (147.75.109.163:37390).
Sep 12 17:39:06.037115 sshd[7001]: Accepted publickey for core from 147.75.109.163 port 37390 ssh2: RSA SHA256:Zk+yQ/wmdhX/Ffv+CE8eokhEY8fdLmZUMms7p7aw/dk
Sep 12 17:39:06.039694 sshd[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:06.057442 systemd-logind[1960]: New session 27 of user core.
Sep 12 17:39:06.061084 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:39:07.808772 sshd[7001]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:07.817051 systemd[1]: sshd@26-172.31.19.87:22-147.75.109.163:37390.service: Deactivated successfully.
Sep 12 17:39:07.824282 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:39:07.828485 systemd-logind[1960]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:39:07.829886 systemd-logind[1960]: Removed session 27.
Sep 12 17:39:22.613900 systemd[1]: cri-containerd-2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea.scope: Deactivated successfully.
Sep 12 17:39:22.614713 systemd[1]: cri-containerd-2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea.scope: Consumed 4.909s CPU time, 34.2M memory peak, 0B memory swap peak.
Sep 12 17:39:22.664380 systemd[1]: cri-containerd-323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9.scope: Deactivated successfully.
Sep 12 17:39:22.665024 systemd[1]: cri-containerd-323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9.scope: Consumed 11.902s CPU time.
Sep 12 17:39:22.916631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea-rootfs.mount: Deactivated successfully.
Sep 12 17:39:22.936641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9-rootfs.mount: Deactivated successfully.
Sep 12 17:39:22.997605 containerd[1989]: time="2025-09-12T17:39:22.969108244Z" level=info msg="shim disconnected" id=2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea namespace=k8s.io
Sep 12 17:39:23.008408 containerd[1989]: time="2025-09-12T17:39:22.997617782Z" level=warning msg="cleaning up after shim disconnected" id=2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea namespace=k8s.io
Sep 12 17:39:23.008408 containerd[1989]: time="2025-09-12T17:39:22.997642277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:23.008408 containerd[1989]: time="2025-09-12T17:39:22.960533715Z" level=info msg="shim disconnected" id=323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9 namespace=k8s.io
Sep 12 17:39:23.008408 containerd[1989]: time="2025-09-12T17:39:22.998236040Z" level=warning msg="cleaning up after shim disconnected" id=323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9 namespace=k8s.io
Sep 12 17:39:23.008408 containerd[1989]: time="2025-09-12T17:39:22.998251108Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:24.200546 kubelet[3280]: I0912 17:39:24.200486 3280 scope.go:117] "RemoveContainer" containerID="2cfb40e37e84732e6947a31272e732ef5c3da734296f90b668324abee7c5e6ea"
Sep 12 17:39:24.206540 kubelet[3280]: I0912 17:39:24.206494 3280 scope.go:117] "RemoveContainer" containerID="323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9"
Sep 12 17:39:24.328546 containerd[1989]: time="2025-09-12T17:39:24.328486202Z" level=info msg="CreateContainer within sandbox \"5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:39:24.329206 containerd[1989]: time="2025-09-12T17:39:24.328486348Z" level=info msg="CreateContainer within sandbox \"c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 17:39:24.460192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965645714.mount: Deactivated successfully.
Sep 12 17:39:24.466963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411776349.mount: Deactivated successfully.
Sep 12 17:39:24.477199 containerd[1989]: time="2025-09-12T17:39:24.476235825Z" level=info msg="CreateContainer within sandbox \"5a5a12e7c35b30d05d3cd6071fb79fcce1bb3a1cc05b13d534bd08bcb4dde8f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"de81e3f71d4eb79aa6e54f2515525820b260bc4210f979d3375d52a405c6fc02\""
Sep 12 17:39:24.478730 containerd[1989]: time="2025-09-12T17:39:24.477985234Z" level=info msg="StartContainer for \"de81e3f71d4eb79aa6e54f2515525820b260bc4210f979d3375d52a405c6fc02\""
Sep 12 17:39:24.479942 containerd[1989]: time="2025-09-12T17:39:24.479912838Z" level=info msg="CreateContainer within sandbox \"c5bb80b8e43b5edc280daf017d1be6ef1e92b9130fa93083e657575d0afa9951\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971\""
Sep 12 17:39:24.480845 containerd[1989]: time="2025-09-12T17:39:24.480527764Z" level=info msg="StartContainer for \"4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971\""
Sep 12 17:39:24.539314 systemd[1]: Started cri-containerd-de81e3f71d4eb79aa6e54f2515525820b260bc4210f979d3375d52a405c6fc02.scope - libcontainer container de81e3f71d4eb79aa6e54f2515525820b260bc4210f979d3375d52a405c6fc02.
Sep 12 17:39:24.545835 systemd[1]: Started cri-containerd-4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971.scope - libcontainer container 4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971.
Sep 12 17:39:24.622558 containerd[1989]: time="2025-09-12T17:39:24.622459869Z" level=info msg="StartContainer for \"de81e3f71d4eb79aa6e54f2515525820b260bc4210f979d3375d52a405c6fc02\" returns successfully"
Sep 12 17:39:24.630671 containerd[1989]: time="2025-09-12T17:39:24.630506828Z" level=info msg="StartContainer for \"4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971\" returns successfully"
Sep 12 17:39:27.474765 systemd[1]: cri-containerd-a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd.scope: Deactivated successfully.
Sep 12 17:39:27.475733 systemd[1]: cri-containerd-a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd.scope: Consumed 2.645s CPU time, 25.4M memory peak, 0B memory swap peak.
Sep 12 17:39:27.513221 containerd[1989]: time="2025-09-12T17:39:27.513134919Z" level=info msg="shim disconnected" id=a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd namespace=k8s.io
Sep 12 17:39:27.513221 containerd[1989]: time="2025-09-12T17:39:27.513214641Z" level=warning msg="cleaning up after shim disconnected" id=a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd namespace=k8s.io
Sep 12 17:39:27.513221 containerd[1989]: time="2025-09-12T17:39:27.513226734Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:27.515202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd-rootfs.mount: Deactivated successfully.
Sep 12 17:39:28.201542 kubelet[3280]: I0912 17:39:28.200946 3280 scope.go:117] "RemoveContainer" containerID="a39f43f17d7f9b6e3f1fa4758b8d25b720aab01348be25415d62a61205b800fd"
Sep 12 17:39:28.224933 containerd[1989]: time="2025-09-12T17:39:28.224876612Z" level=info msg="CreateContainer within sandbox \"b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:39:28.303384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055263889.mount: Deactivated successfully.
Sep 12 17:39:28.334764 containerd[1989]: time="2025-09-12T17:39:28.334713201Z" level=info msg="CreateContainer within sandbox \"b07135b56f2881ad6e4f79bd1cf4474ce8a8ca59e5ae1f5c88a2b732c3e1d7bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7b6539771514d1be6e34b8e129176eb09e27e25c90008265a87dcbbc9899af5d\""
Sep 12 17:39:28.335328 containerd[1989]: time="2025-09-12T17:39:28.335294808Z" level=info msg="StartContainer for \"7b6539771514d1be6e34b8e129176eb09e27e25c90008265a87dcbbc9899af5d\""
Sep 12 17:39:28.379133 systemd[1]: Started cri-containerd-7b6539771514d1be6e34b8e129176eb09e27e25c90008265a87dcbbc9899af5d.scope - libcontainer container 7b6539771514d1be6e34b8e129176eb09e27e25c90008265a87dcbbc9899af5d.
Sep 12 17:39:28.427334 containerd[1989]: time="2025-09-12T17:39:28.427287465Z" level=info msg="StartContainer for \"7b6539771514d1be6e34b8e129176eb09e27e25c90008265a87dcbbc9899af5d\" returns successfully"
Sep 12 17:39:29.012579 kubelet[3280]: E0912 17:39:29.012439 3280 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 17:39:36.305351 systemd[1]: cri-containerd-4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971.scope: Deactivated successfully.
Sep 12 17:39:36.331201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971-rootfs.mount: Deactivated successfully.
Sep 12 17:39:36.354004 containerd[1989]: time="2025-09-12T17:39:36.353931010Z" level=info msg="shim disconnected" id=4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971 namespace=k8s.io
Sep 12 17:39:36.354004 containerd[1989]: time="2025-09-12T17:39:36.353983322Z" level=warning msg="cleaning up after shim disconnected" id=4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971 namespace=k8s.io
Sep 12 17:39:36.354004 containerd[1989]: time="2025-09-12T17:39:36.353992942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:37.252127 kubelet[3280]: I0912 17:39:37.252080 3280 scope.go:117] "RemoveContainer" containerID="323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9"
Sep 12 17:39:37.252585 kubelet[3280]: I0912 17:39:37.252307 3280 scope.go:117] "RemoveContainer" containerID="4c8217d8370b616c8a84d2f3b24c7537f71b042af0edfa3f4cd96c84d33c6971"
Sep 12 17:39:37.272138 kubelet[3280]: E0912 17:39:37.265693 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-zc86w_tigera-operator(424bca35-337e-4484-89f8-485a85547a8b)\"" pod="tigera-operator/tigera-operator-755d956888-zc86w" podUID="424bca35-337e-4484-89f8-485a85547a8b"
Sep 12 17:39:37.341711 containerd[1989]: time="2025-09-12T17:39:37.341656192Z" level=info msg="RemoveContainer for \"323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9\""
Sep 12 17:39:37.351868 containerd[1989]: time="2025-09-12T17:39:37.351820113Z" level=info msg="RemoveContainer for \"323b0940a495ba133f3303528359d1e4f5b20d6f4c7775040eaf8dab66850ed9\" returns successfully"
Sep 12 17:39:39.024082 kubelet[3280]: E0912 17:39:39.023981 3280 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-87?timeout=10s\": context deadline exceeded"
Sep 12 17:39:41.958418 systemd[1]: run-containerd-runc-k8s.io-d7860e04976201bb6666980d8e1c4fa48d4fa72ee3178c6a11a9e2da6faccf28-runc.UnYgqs.mount: Deactivated successfully.