Aug 13 07:18:41.907812 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:18:41.907835 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:41.907847 kernel: BIOS-provided physical RAM map:
Aug 13 07:18:41.907854 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:18:41.907860 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Aug 13 07:18:41.907867 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Aug 13 07:18:41.907875 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Aug 13 07:18:41.907882 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Aug 13 07:18:41.907889 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Aug 13 07:18:41.907897 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Aug 13 07:18:41.907904 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Aug 13 07:18:41.907911 kernel: NX (Execute Disable) protection: active
Aug 13 07:18:41.907918 kernel: APIC: Static calls initialized
Aug 13 07:18:41.907924 kernel: efi: EFI v2.7 by EDK II
Aug 13 07:18:41.907933 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Aug 13 07:18:41.907943 kernel: SMBIOS 2.7 present.
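The three "usable" e820 ranges above add up to the 2037804K total the kernel reports further down ("Memory: 1874608K/2037804K available"). A minimal Python sketch of that arithmetic; the regex and sample lines are lifted from this log, not from any kernel interface:

    import re

    # The three "usable" ranges from the BIOS-e820 lines above (inclusive bounds).
    E820 = """\
    BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
    BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
    """

    usable = 0
    for start, end in re.findall(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", E820):
        usable += int(end, 16) - int(start, 16) + 1  # bounds are inclusive

    print(f"{usable // 1024} KiB usable")  # 2037804 KiB, matching the later Memory: line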
Aug 13 07:18:41.907951 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 13 07:18:41.907959 kernel: Hypervisor detected: KVM
Aug 13 07:18:41.907966 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:18:41.907974 kernel: kvm-clock: using sched offset of 3496771478 cycles
Aug 13 07:18:41.907982 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:18:41.907990 kernel: tsc: Detected 2499.998 MHz processor
Aug 13 07:18:41.907998 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:18:41.908006 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:18:41.908013 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Aug 13 07:18:41.908024 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:18:41.908031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:18:41.908039 kernel: Using GB pages for direct mapping
Aug 13 07:18:41.908047 kernel: Secure boot disabled
Aug 13 07:18:41.908054 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:18:41.908062 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Aug 13 07:18:41.908081 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 07:18:41.908097 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 07:18:41.908105 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 13 07:18:41.908115 kernel: ACPI: FACS 0x00000000789D0000 000040
Aug 13 07:18:41.908123 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 13 07:18:41.908130 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 07:18:41.908138 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 07:18:41.908146 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 13 07:18:41.908153 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 13 07:18:41.908165 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 07:18:41.908175 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 07:18:41.908183 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Aug 13 07:18:41.908192 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Aug 13 07:18:41.908200 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Aug 13 07:18:41.908208 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Aug 13 07:18:41.908217 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Aug 13 07:18:41.908225 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Aug 13 07:18:41.908235 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Aug 13 07:18:41.908243 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Aug 13 07:18:41.908251 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Aug 13 07:18:41.908260 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Aug 13 07:18:41.908268 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Aug 13 07:18:41.908276 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Aug 13 07:18:41.908284 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:18:41.908292 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:18:41.908300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 13 07:18:41.908310 kernel: NUMA: Initialized distance table, cnt=1
Aug 13 07:18:41.908319 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Aug 13 07:18:41.908327 kernel: Zone ranges:
Aug 13 07:18:41.908335 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:18:41.908343 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Aug 13 07:18:41.908351 kernel: Normal empty
Aug 13 07:18:41.908360 kernel: Movable zone start for each node
Aug 13 07:18:41.908368 kernel: Early memory node ranges
Aug 13 07:18:41.908376 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:18:41.908386 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Aug 13 07:18:41.908394 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Aug 13 07:18:41.908402 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Aug 13 07:18:41.908410 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:18:41.908419 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:18:41.908427 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 07:18:41.908435 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Aug 13 07:18:41.908443 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 07:18:41.908452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:18:41.908460 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 13 07:18:41.908470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:18:41.908478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:18:41.908487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:18:41.908495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:18:41.908503 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:18:41.908511 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 07:18:41.908520 kernel: TSC deadline timer available
Aug 13 07:18:41.908536 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:18:41.908545 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:18:41.908555 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Aug 13 07:18:41.908564 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:18:41.908572 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:18:41.908580 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:18:41.908588 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:18:41.908597 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:18:41.908604 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:18:41.908613 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:18:41.908621 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:18:41.908633 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:41.908641 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:18:41.908649 kernel: random: crng init done
Aug 13 07:18:41.908657 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:18:41.908665 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:18:41.908674 kernel: Fallback order for Node 0: 0
Aug 13 07:18:41.908682 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Aug 13 07:18:41.908691 kernel: Policy zone: DMA32
Aug 13 07:18:41.908702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:18:41.908710 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 162936K reserved, 0K cma-reserved)
Aug 13 07:18:41.908719 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:18:41.908727 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:18:41.908735 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:18:41.908744 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:18:41.908752 kernel: Dynamic Preempt: voluntary
Aug 13 07:18:41.908760 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:18:41.908769 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:18:41.908780 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:18:41.908788 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:18:41.908796 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:18:41.908805 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:18:41.908813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:18:41.908821 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:18:41.908830 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 07:18:41.908849 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:18:41.908858 kernel: Console: colour dummy device 80x25
Aug 13 07:18:41.908866 kernel: printk: console [tty0] enabled
Aug 13 07:18:41.908875 kernel: printk: console [ttyS0] enabled
Aug 13 07:18:41.908884 kernel: ACPI: Core revision 20230628
Aug 13 07:18:41.908895 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 13 07:18:41.908904 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:18:41.908912 kernel: x2apic enabled
Aug 13 07:18:41.908921 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:18:41.908930 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Aug 13 07:18:41.908942 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Aug 13 07:18:41.908951 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 07:18:41.908959 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 07:18:41.908968 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:18:41.908977 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:18:41.908985 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:18:41.908994 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 07:18:41.909003 kernel: RETBleed: Vulnerable
Aug 13 07:18:41.909011 kernel: Speculative Store Bypass: Vulnerable
Aug 13 07:18:41.909023 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:18:41.909032 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:18:41.909040 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 13 07:18:41.909049 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:18:41.909058 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:18:41.909083 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:18:41.909095 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:18:41.909103 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 07:18:41.909112 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 07:18:41.909120 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 07:18:41.909129 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 07:18:41.909141 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 07:18:41.909149 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 07:18:41.909158 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:18:41.909167 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 07:18:41.909176 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 07:18:41.909184 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 13 07:18:41.909193 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 13 07:18:41.909202 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 13 07:18:41.909210 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 13 07:18:41.909219 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 13 07:18:41.909228 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:18:41.909236 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:18:41.909248 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:18:41.909256 kernel: landlock: Up and running.
Aug 13 07:18:41.909265 kernel: SELinux: Initializing.
Aug 13 07:18:41.909274 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:18:41.909283 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:18:41.909291 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 07:18:41.909300 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:18:41.909309 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
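The kernel command line logged above (once as the bootloader handed it over, once after dracut prepended rootflags=rw mount.usrflags=ro) is plain space-separated tokens, either bare flags or key=value pairs. A rough stdlib sketch of how such a line can be split; the real kernel parser also handles quoting, and repeated keys such as console= are applied in order:

    # A shortened copy of the "Kernel command line:" entry above.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
        "mount.usrflags=ro root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
        "flatcar.first_boot=detected flatcar.oem.id=ec2 net.ifnames=0"
    )

    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            options[key] = value  # later duplicates win, e.g. the second console=
        else:
            flags.append(token)

    print(options["root"])     # LABEL=ROOT
    print(options["console"])  # tty0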
Aug 13 07:18:41.909318 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:18:41.909327 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 07:18:41.909339 kernel: signal: max sigframe size: 3632
Aug 13 07:18:41.909347 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:18:41.909356 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:18:41.909365 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:18:41.909374 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:18:41.909383 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:18:41.909397 kernel: .... node #0, CPUs: #1
Aug 13 07:18:41.909408 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 07:18:41.909418 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:18:41.909430 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:18:41.909439 kernel: smpboot: Max logical packages: 1
Aug 13 07:18:41.909448 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Aug 13 07:18:41.909456 kernel: devtmpfs: initialized
Aug 13 07:18:41.909465 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:18:41.909474 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Aug 13 07:18:41.909483 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:18:41.909492 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:18:41.909503 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:18:41.909512 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:18:41.909521 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:18:41.909530 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:18:41.909538 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:18:41.909547 kernel: audit: type=2000 audit(1755069521.885:1): state=initialized audit_enabled=0 res=1
Aug 13 07:18:41.909555 kernel: cpuidle: using governor menu
Aug 13 07:18:41.909564 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:18:41.909573 kernel: dca service started, version 1.12.1
Aug 13 07:18:41.909584 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:18:41.909593 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
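The mitigation verdicts above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data) are also exported at runtime through sysfs, so they can be checked without scraping dmesg. A small sketch using that stable interface:

    from pathlib import Path

    # Each file under this directory holds the same verdict the boot log prints.
    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
    # On this instance, "retbleed" would read "Vulnerable", matching the log above.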
Aug 13 07:18:41.909602 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:18:41.909611 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:18:41.909620 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:18:41.909629 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:18:41.909638 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:18:41.909646 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:18:41.909655 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:18:41.909666 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 07:18:41.909675 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:18:41.909684 kernel: ACPI: Interpreter enabled
Aug 13 07:18:41.909692 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:18:41.909701 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:18:41.909710 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:18:41.909719 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:18:41.909728 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 07:18:41.909736 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:18:41.909901 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:18:41.910037 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 07:18:41.910158 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 07:18:41.910171 kernel: acpiphp: Slot [3] registered
Aug 13 07:18:41.910180 kernel: acpiphp: Slot [4] registered
Aug 13 07:18:41.910189 kernel: acpiphp: Slot [5] registered
Aug 13 07:18:41.910198 kernel: acpiphp: Slot [6] registered
Aug 13 07:18:41.910207 kernel: acpiphp: Slot [7] registered
Aug 13 07:18:41.910220 kernel: acpiphp: Slot [8] registered
Aug 13 07:18:41.910229 kernel: acpiphp: Slot [9] registered
Aug 13 07:18:41.910238 kernel: acpiphp: Slot [10] registered
Aug 13 07:18:41.910247 kernel: acpiphp: Slot [11] registered
Aug 13 07:18:41.910256 kernel: acpiphp: Slot [12] registered
Aug 13 07:18:41.910265 kernel: acpiphp: Slot [13] registered
Aug 13 07:18:41.910274 kernel: acpiphp: Slot [14] registered
Aug 13 07:18:41.910282 kernel: acpiphp: Slot [15] registered
Aug 13 07:18:41.910291 kernel: acpiphp: Slot [16] registered
Aug 13 07:18:41.910303 kernel: acpiphp: Slot [17] registered
Aug 13 07:18:41.910312 kernel: acpiphp: Slot [18] registered
Aug 13 07:18:41.910321 kernel: acpiphp: Slot [19] registered
Aug 13 07:18:41.910329 kernel: acpiphp: Slot [20] registered
Aug 13 07:18:41.910338 kernel: acpiphp: Slot [21] registered
Aug 13 07:18:41.910347 kernel: acpiphp: Slot [22] registered
Aug 13 07:18:41.910355 kernel: acpiphp: Slot [23] registered
Aug 13 07:18:41.910364 kernel: acpiphp: Slot [24] registered
Aug 13 07:18:41.910373 kernel: acpiphp: Slot [25] registered
Aug 13 07:18:41.910382 kernel: acpiphp: Slot [26] registered
Aug 13 07:18:41.910393 kernel: acpiphp: Slot [27] registered
Aug 13 07:18:41.910402 kernel: acpiphp: Slot [28] registered
Aug 13 07:18:41.910410 kernel: acpiphp: Slot [29] registered
Aug 13 07:18:41.910419 kernel: acpiphp: Slot [30] registered
Aug 13 07:18:41.910428 kernel: acpiphp: Slot [31] registered
Aug 13 07:18:41.910436 kernel: PCI host bridge to bus 0000:00
Aug 13 07:18:41.910533 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:18:41.910618 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:18:41.910704 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:18:41.910787 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 07:18:41.910869 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Aug 13 07:18:41.910952 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:18:41.911057 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 07:18:41.911182 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 07:18:41.911286 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 13 07:18:41.911380 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 07:18:41.911480 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 13 07:18:41.911572 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 13 07:18:41.911664 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 13 07:18:41.911758 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 13 07:18:41.911853 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 13 07:18:41.911948 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 13 07:18:41.912049 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 13 07:18:41.912166 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Aug 13 07:18:41.912259 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 07:18:41.912351 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Aug 13 07:18:41.912443 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:18:41.912543 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 07:18:41.912641 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Aug 13 07:18:41.912741 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 07:18:41.912834 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Aug 13 07:18:41.912846 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:18:41.912855 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:18:41.912864 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:18:41.912873 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:18:41.912882 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 07:18:41.912894 kernel: iommu: Default domain type: Translated
Aug 13 07:18:41.912903 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:18:41.912912 kernel: efivars: Registered efivars operations
Aug 13 07:18:41.912921 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:18:41.912930 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:18:41.912939 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Aug 13 07:18:41.912947 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Aug 13 07:18:41.913036 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 13 07:18:41.913143 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 13 07:18:41.913240 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:18:41.913252 kernel: vgaarb: loaded
Aug 13 07:18:41.913261 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 13 07:18:41.913270 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 13 07:18:41.913279 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:18:41.913288 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:18:41.913297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:18:41.913306 kernel: pnp: PnP ACPI init
Aug 13 07:18:41.913317 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 07:18:41.913326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:18:41.913335 kernel: NET: Registered PF_INET protocol family
Aug 13 07:18:41.913345 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:18:41.913354 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:18:41.913362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:18:41.913371 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:18:41.913380 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:18:41.913389 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:18:41.913401 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:18:41.913410 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:18:41.913419 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:18:41.913428 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:18:41.913518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:18:41.913604 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:18:41.913690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:18:41.913774 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 07:18:41.913857 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Aug 13 07:18:41.913966 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 07:18:41.913978 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:18:41.913988 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:18:41.913997 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Aug 13 07:18:41.914007 kernel: clocksource: Switched to clocksource tsc
Aug 13 07:18:41.914016 kernel: Initialise system trusted keyrings
Aug 13 07:18:41.914025 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 07:18:41.914034 kernel: Key type asymmetric registered
Aug 13 07:18:41.914046 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:18:41.914055 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:18:41.914073 kernel: io scheduler mq-deadline registered
Aug 13 07:18:41.914091 kernel: io scheduler kyber registered
Aug 13 07:18:41.914100 kernel: io scheduler bfq registered
Aug 13 07:18:41.914109 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:18:41.914118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:18:41.914127 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:18:41.914136 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:18:41.914148 kernel: i8042: Warning: Keylock active
Aug 13 07:18:41.914157 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:18:41.914166 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:18:41.914273 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 07:18:41.914362 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 07:18:41.914447 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T07:18:41 UTC (1755069521)
Aug 13 07:18:41.914533 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 07:18:41.914544 kernel: intel_pstate: CPU model not supported
Aug 13 07:18:41.914557 kernel: efifb: probing for efifb
Aug 13 07:18:41.914566 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Aug 13 07:18:41.914574 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Aug 13 07:18:41.914584 kernel: efifb: scrolling: redraw
Aug 13 07:18:41.914592 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 07:18:41.914601 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 07:18:41.914610 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:18:41.914619 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:18:41.914628 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:18:41.914640 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:18:41.914649 kernel: Segment Routing with IPv6
Aug 13 07:18:41.914658 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:18:41.914666 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:18:41.914675 kernel: Key type dns_resolver registered
Aug 13 07:18:41.914684 kernel: IPI shorthand broadcast: enabled
Aug 13 07:18:41.914713 kernel: sched_clock: Marking stable (488002074, 129539075)->(686220043, -68678894)
Aug 13 07:18:41.914724 kernel: registered taskstats version 1
Aug 13 07:18:41.914734 kernel: Loading compiled-in X.509 certificates
Aug 13 07:18:41.914745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:18:41.914755 kernel: Key type .fscrypt registered
Aug 13 07:18:41.914764 kernel: Key type fscrypt-provisioning registered
Aug 13 07:18:41.914774 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:18:41.914783 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:18:41.914792 kernel: ima: No architecture policies found
Aug 13 07:18:41.914801 kernel: clk: Disabling unused clocks
Aug 13 07:18:41.914811 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:18:41.914821 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:18:41.914833 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:18:41.914842 kernel: Run /init as init process
Aug 13 07:18:41.914851 kernel: with arguments:
Aug 13 07:18:41.914860 kernel: /init
Aug 13 07:18:41.914869 kernel: with environment:
Aug 13 07:18:41.914878 kernel: HOME=/
Aug 13 07:18:41.914888 kernel: TERM=linux
Aug 13 07:18:41.914897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:18:41.914909 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:18:41.914926 systemd[1]: Detected virtualization amazon.
Aug 13 07:18:41.914936 systemd[1]: Detected architecture x86-64.
Aug 13 07:18:41.914946 systemd[1]: Running in initrd.
Aug 13 07:18:41.914955 systemd[1]: No hostname configured, using default hostname.
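The "Detected virtualization amazon" line comes from systemd probing the platform; one of its inputs is the same DMI data the kernel printed earlier ("DMI: Amazon EC2 t3.small/..."). A sketch that covers only the DMI path; systemd-detect-virt consults more sources (CPUID, hypervisor sysfs) than this:

    from pathlib import Path

    def dmi(field: str) -> str:
        path = Path("/sys/class/dmi/id") / field
        return path.read_text().strip() if path.exists() else ""

    # EC2 instances expose "Amazon EC2" in the DMI vendor strings.
    if "amazon" in (dmi("sys_vendor") + dmi("bios_vendor")).lower():
        print("EC2: systemd-detect-virt would report 'amazon' here")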
Aug 13 07:18:41.914964 systemd[1]: Hostname set to <localhost>.
Aug 13 07:18:41.914975 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:18:41.914984 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:18:41.914996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:41.915006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:41.915017 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:18:41.915027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:18:41.915037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:18:41.915047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:18:41.915061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:18:41.915135 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:18:41.915145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:41.915155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:41.915165 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:18:41.915178 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:18:41.915188 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:18:41.915198 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:18:41.915208 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:18:41.915218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:18:41.915228 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:18:41.915238 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:18:41.915248 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:41.915258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:41.915271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:41.915280 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:18:41.915290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:18:41.915300 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:18:41.915310 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:18:41.915320 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:18:41.915330 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:18:41.915339 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:18:41.915349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:41.915386 systemd-journald[178]: Collecting audit messages is disabled.
Aug 13 07:18:41.915410 systemd-journald[178]: Journal started
Aug 13 07:18:41.915434 systemd-journald[178]: Runtime Journal (/run/log/journal/ec240a2e52bbce5e9ca5f82a38e17aee) is 4.7M, max 38.2M, 33.4M free.
Aug 13 07:18:41.918463 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:18:41.920474 systemd-modules-load[179]: Inserted module 'overlay'
Aug 13 07:18:41.922826 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:18:41.925610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:41.926318 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:18:41.939809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:18:41.942236 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:18:41.942829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:41.958133 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:18:41.961089 kernel: Bridge firewalling registered
Aug 13 07:18:41.961107 systemd-modules-load[179]: Inserted module 'br_netfilter'
Aug 13 07:18:41.963332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:41.964270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:41.966025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:41.972687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:18:41.974386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:18:41.974999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:41.988401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:41.992242 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:18:41.995808 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:41.998444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:42.002672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:42.005109 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:18:42.015982 dracut-cmdline[209]: dracut-dracut-053
Aug 13 07:18:42.020177 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:18:42.058297 systemd-resolved[213]: Positive Trust Anchors:
Aug 13 07:18:42.058853 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:18:42.058915 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:18:42.063665 systemd-resolved[213]: Defaulting to hostname 'linux'.
Aug 13 07:18:42.065041 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:42.068385 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:42.111110 kernel: SCSI subsystem initialized
Aug 13 07:18:42.121107 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:18:42.132097 kernel: iscsi: registered transport (tcp)
Aug 13 07:18:42.154325 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:18:42.154412 kernel: QLogic iSCSI HBA Driver
Aug 13 07:18:42.192971 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:18:42.202372 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:18:42.228260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:18:42.228341 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:18:42.228365 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:18:42.272100 kernel: raid6: avx512x4 gen() 18309 MB/s
Aug 13 07:18:42.290095 kernel: raid6: avx512x2 gen() 18131 MB/s
Aug 13 07:18:42.308099 kernel: raid6: avx512x1 gen() 18068 MB/s
Aug 13 07:18:42.326094 kernel: raid6: avx2x4 gen() 18038 MB/s
Aug 13 07:18:42.344098 kernel: raid6: avx2x2 gen() 18014 MB/s
Aug 13 07:18:42.362398 kernel: raid6: avx2x1 gen() 13920 MB/s
Aug 13 07:18:42.362452 kernel: raid6: using algorithm avx512x4 gen() 18309 MB/s
Aug 13 07:18:42.381281 kernel: raid6: .... xor() 7846 MB/s, rmw enabled
Aug 13 07:18:42.381336 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 07:18:42.403105 kernel: xor: automatically using best checksumming function avx
Aug 13 07:18:42.568100 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:18:42.578310 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:18:42.587335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:42.600329 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Aug 13 07:18:42.605594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:42.613253 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:18:42.635481 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Aug 13 07:18:42.666580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:18:42.678359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:18:42.730546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:42.737259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:18:42.766954 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:18:42.769690 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:18:42.772280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:42.773372 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:18:42.780313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:18:42.812618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:18:42.834151 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:18:42.863992 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 07:18:42.864326 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 07:18:42.864495 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 07:18:42.867842 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:18:42.868525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:18:42.868848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:42.871981 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Aug 13 07:18:42.872712 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:42.873187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:42.884212 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:11:2b:34:3a:d5
Aug 13 07:18:42.873509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:42.874244 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:42.885813 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 07:18:42.888477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:42.892910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:42.893046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:42.911046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:42.923833 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 07:18:42.924206 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 07:18:42.926725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:42.934884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:18:42.941091 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 07:18:42.954299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:18:42.954363 kernel: GPT:9289727 != 16777215
Aug 13 07:18:42.954385 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:18:42.954397 kernel: GPT:9289727 != 16777215
Aug 13 07:18:42.954408 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:18:42.954420 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 07:18:42.956836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
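The GPT complaints above are the usual signature of a disk image written for a smaller disk and then placed on a larger EBS volume: the backup header is recorded at LBA 9289727 while the volume actually ends at LBA 16777215. The arithmetic, as a sketch:

    SECTOR = 512                 # EBS exposes 512-byte logical sectors
    recorded_alt_lba = 9289727   # where the image's backup GPT header still points
    last_lba = 16777215          # the volume's real last sector

    print(f"image size:  {(recorded_alt_lba + 1) * SECTOR / 2**30:.1f} GiB")  # ~4.4 GiB
    print(f"volume size: {(last_lba + 1) * SECTOR / 2**30:.1f} GiB")          # 8.0 GiB
    # disk-uuid.service below rewrites both headers ("Secondary Header is
    # updated."), which clears the mismatch.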
Aug 13 07:18:43.021097 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Aug 13 07:18:43.038097 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (446)
Aug 13 07:18:43.089614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 13 07:18:43.104028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 13 07:18:43.120497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 13 07:18:43.121059 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 13 07:18:43.128529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 07:18:43.135261 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:18:43.142460 disk-uuid[631]: Primary Header is updated.
Aug 13 07:18:43.142460 disk-uuid[631]: Secondary Entries is updated.
Aug 13 07:18:43.142460 disk-uuid[631]: Secondary Header is updated.
Aug 13 07:18:43.147095 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 07:18:43.154097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 07:18:43.168097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 07:18:44.175167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 07:18:44.176149 disk-uuid[632]: The operation has completed successfully.
Aug 13 07:18:44.319949 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:18:44.320138 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:18:44.336301 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:18:44.341994 sh[973]: Success
Aug 13 07:18:44.357093 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 07:18:44.463328 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:18:44.477201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:18:44.479078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:18:44.523989 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:18:44.524060 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:44.524096 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:18:44.527477 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:18:44.527544 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:18:44.559097 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 07:18:44.572749 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:18:44.573846 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:18:44.584286 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:18:44.588240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
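verity-setup.service is what turns the verity.usr= partition plus the verity.usrhash= root hash from the kernel command line into /dev/mapper/usr. Roughly the shape of that operation, expressed as a veritysetup call; Flatcar keeps the hash tree inside the USR partition itself, so a real invocation also needs the correct offset options, and this sketch should not be read as the exact command the unit runs:

    import subprocess

    PART = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"  # verity.usr=
    ROOT_HASH = "8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a"  # verity.usrhash=

    # Illustrative only: data and hash areas share the partition here, so the
    # real unit also passes offset options that are omitted in this sketch.
    subprocess.run(["veritysetup", "open", PART, "usr", PART, ROOT_HASH], check=True)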
Aug 13 07:18:44.612124 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:44.612185 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:44.614664 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 07:18:44.621097 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 07:18:44.632571 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:18:44.635793 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:44.642421 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:18:44.649404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:18:44.702575 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:44.707285 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:18:44.732978 systemd-networkd[1165]: lo: Link UP
Aug 13 07:18:44.732993 systemd-networkd[1165]: lo: Gained carrier
Aug 13 07:18:44.734709 systemd-networkd[1165]: Enumeration completed
Aug 13 07:18:44.735139 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:44.735144 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:18:44.736485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:18:44.738933 systemd[1]: Reached target network.target - Network.
Aug 13 07:18:44.739390 systemd-networkd[1165]: eth0: Link UP
Aug 13 07:18:44.739396 systemd-networkd[1165]: eth0: Gained carrier
Aug 13 07:18:44.739408 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:44.753165 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.17.50/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 07:18:44.807692 ignition[1111]: Ignition 2.19.0
Aug 13 07:18:44.807704 ignition[1111]: Stage: fetch-offline
Aug 13 07:18:44.807916 ignition[1111]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:44.807926 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:44.808493 ignition[1111]: Ignition finished successfully
Aug 13 07:18:44.809601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:44.815268 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
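The DHCPv4 lease above (172.31.17.50/20 with gateway 172.31.16.1) can be sanity-checked with the stdlib ipaddress module; the gateway sits inside the same /20:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.17.50/20")  # the acquired address
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True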
Aug 13 07:18:44.830280 ignition[1174]: Ignition 2.19.0
Aug 13 07:18:44.830294 ignition[1174]: Stage: fetch
Aug 13 07:18:44.830752 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:44.830767 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:44.830884 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:44.840989 ignition[1174]: PUT result: OK
Aug 13 07:18:44.843095 ignition[1174]: parsed url from cmdline: ""
Aug 13 07:18:44.843105 ignition[1174]: no config URL provided
Aug 13 07:18:44.843112 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 07:18:44.843124 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
Aug 13 07:18:44.843140 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:44.844062 ignition[1174]: PUT result: OK
Aug 13 07:18:44.844113 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 07:18:44.844976 ignition[1174]: GET result: OK
Aug 13 07:18:44.845039 ignition[1174]: parsing config with SHA512: 1dc275628e1494344f8c1b54dea2fe242686896dcf841d32db01902876aded9e8bade7dc3955c5eb015f0c55ec59acae89ba5be2a3e846334af157e1e02191a0
Aug 13 07:18:44.848950 unknown[1174]: fetched base config from "system"
Aug 13 07:18:44.849181 unknown[1174]: fetched base config from "system"
Aug 13 07:18:44.849188 unknown[1174]: fetched user config from "aws"
Aug 13 07:18:44.849808 ignition[1174]: fetch: fetch complete
Aug 13 07:18:44.849813 ignition[1174]: fetch: fetch passed
Aug 13 07:18:44.849867 ignition[1174]: Ignition finished successfully
Aug 13 07:18:44.851963 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 07:18:44.855268 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 07:18:44.871555 ignition[1180]: Ignition 2.19.0
Aug 13 07:18:44.871564 ignition[1180]: Stage: kargs
Aug 13 07:18:44.871924 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:44.871933 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:44.872026 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:44.872967 ignition[1180]: PUT result: OK
Aug 13 07:18:44.875781 ignition[1180]: kargs: kargs passed
Aug 13 07:18:44.875842 ignition[1180]: Ignition finished successfully
Aug 13 07:18:44.877436 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:44.882297 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 07:18:44.897390 ignition[1186]: Ignition 2.19.0
Aug 13 07:18:44.897401 ignition[1186]: Stage: disks
Aug 13 07:18:44.897760 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:44.897770 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:44.897849 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:44.898881 ignition[1186]: PUT result: OK
Aug 13 07:18:44.905368 ignition[1186]: disks: disks passed
Aug 13 07:18:44.905438 ignition[1186]: Ignition finished successfully
Aug 13 07:18:44.906659 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 07:18:44.907523 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:44.907880 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 07:18:44.908397 systemd[1]: Reached target local-fs.target - Local File Systems.
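Each Ignition stage above performs the same two-step IMDSv2 exchange it logs: a PUT to mint a session token, then authenticated GETs such as the user-data fetch. A stdlib-only sketch of that exchange (it only works from inside an EC2 instance; the 60-second TTL is an arbitrary choice here):

    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",  # same dated path Ignition logs above
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req, timeout=2).read().decode())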
Aug 13 07:18:44.908920 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:18:44.909479 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:18:44.914248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 07:18:44.943475 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 07:18:44.946397 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 07:18:44.951220 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 07:18:45.054190 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 07:18:45.054947 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 07:18:45.055961 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:18:45.062216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:45.066980 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:18:45.069016 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 07:18:45.070412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:18:45.070453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:45.086522 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:18:45.087266 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1214)
Aug 13 07:18:45.091110 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:45.095080 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:45.095146 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 07:18:45.097279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:18:45.102144 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 07:18:45.103959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:45.188790 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:18:45.216193 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:18:45.221096 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:18:45.225762 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:18:45.371775 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:45.379266 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:18:45.382286 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:45.392103 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:45.416493 ignition[1326]: INFO : Ignition 2.19.0
Aug 13 07:18:45.417507 ignition[1326]: INFO : Stage: mount
Aug 13 07:18:45.418748 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:45.419771 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:45.419771 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:45.420857 ignition[1326]: INFO : PUT result: OK
Aug 13 07:18:45.424163 ignition[1326]: INFO : mount: mount passed
Aug 13 07:18:45.426183 ignition[1326]: INFO : Ignition finished successfully
Aug 13 07:18:45.427047 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:18:45.435293 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:18:45.438026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:45.520517 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:18:45.525303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:18:45.547325 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338)
Aug 13 07:18:45.551311 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:18:45.551383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:18:45.551399 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 07:18:45.558098 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 07:18:45.560237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:18:45.580521 ignition[1355]: INFO : Ignition 2.19.0
Aug 13 07:18:45.580521 ignition[1355]: INFO : Stage: files
Aug 13 07:18:45.581692 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:45.581692 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:45.581692 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:45.582898 ignition[1355]: INFO : PUT result: OK
Aug 13 07:18:45.584703 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:18:45.586292 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:18:45.586292 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:18:45.591521 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:18:45.592303 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:18:45.592303 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:18:45.591969 unknown[1355]: wrote ssh authorized keys file for user: core
Aug 13 07:18:45.594137 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:45.594137 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:18:45.663026 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:18:45.900886 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:45.902109 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:45.907237 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:18:45.961222 systemd-networkd[1165]: eth0: Gained IPv6LL
Aug 13 07:18:46.341950 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:18:46.625449 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:18:46.625449 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:18:46.627819 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:18:46.628781 ignition[1355]: INFO : files: files passed
Aug 13 07:18:46.628781 ignition[1355]: INFO : Ignition finished successfully
Aug 13 07:18:46.630458 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:18:46.636417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:18:46.640235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:18:46.643604 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:18:46.643701 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:18:46.653770 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:46.653770 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:46.656390 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:18:46.657285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:46.658625 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:18:46.663289 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:18:46.692718 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:18:46.692848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:18:46.694591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:18:46.695586 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:18:46.696562 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:18:46.702294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:18:46.716588 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:46.721288 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:18:46.734193 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:46.735176 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:46.736155 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:18:46.737000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:18:46.737243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:18:46.738510 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:18:46.739466 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:18:46.740302 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:18:46.741093 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:18:46.741864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:18:46.742761 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:18:46.743545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:18:46.744340 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:18:46.745502 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:18:46.746326 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:18:46.747035 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:18:46.747235 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:18:46.748325 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:46.749130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:46.749798 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:18:46.750061 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:46.750636 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:18:46.750806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:18:46.752205 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:18:46.752390 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:18:46.753101 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:18:46.753259 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:18:46.760424 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:18:46.763397 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:18:46.763963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:18:46.764232 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:46.768492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:18:46.768716 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:18:46.783145 ignition[1407]: INFO : Ignition 2.19.0
Aug 13 07:18:46.783145 ignition[1407]: INFO : Stage: umount
Aug 13 07:18:46.783145 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:18:46.783145 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 07:18:46.783145 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 07:18:46.787296 ignition[1407]: INFO : PUT result: OK
Aug 13 07:18:46.787466 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:18:46.787751 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:18:46.792893 ignition[1407]: INFO : umount: umount passed
Aug 13 07:18:46.792893 ignition[1407]: INFO : Ignition finished successfully
Aug 13 07:18:46.794313 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:18:46.794455 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:18:46.795843 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:18:46.795978 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:18:46.798253 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:18:46.798325 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:18:46.798769 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:18:46.798830 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:18:46.799718 systemd[1]: Stopped target network.target - Network.
Aug 13 07:18:46.800181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:18:46.800250 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:18:46.800740 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:18:46.802480 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:18:46.804132 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:46.804946 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:18:46.805403 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:18:46.805892 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:18:46.805956 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:18:46.806566 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:18:46.806619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:18:46.807168 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:18:46.807247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:18:46.808296 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:18:46.808358 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:18:46.809105 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:18:46.809735 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:46.812131 systemd-networkd[1165]: eth0: DHCPv6 lease lost
Aug 13 07:18:46.815287 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:18:46.816287 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:18:46.816434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:46.819146 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:18:46.819308 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:18:46.821332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:18:46.821399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:46.827192 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:18:46.827770 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:18:46.827854 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:18:46.828500 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:18:46.828556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:46.831153 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:18:46.831214 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:46.831986 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:18:46.832045 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:46.832795 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:46.847230 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:18:46.847383 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:18:46.849769 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:18:46.850045 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:46.851209 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:18:46.851268 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:46.852294 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:18:46.852344 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:46.853016 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:18:46.853102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:18:46.854596 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:18:46.854660 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:18:46.855787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:18:46.855846 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:18:46.864276 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:18:46.864747 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:18:46.864840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:46.866220 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:18:46.866289 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:46.867627 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:18:46.867685 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:46.869174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:18:46.869233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:46.875159 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:18:46.875298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:18:46.960564 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:18:46.960677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:18:46.961747 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:18:46.962474 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:18:46.962536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:18:46.969320 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:18:46.978461 systemd[1]: Switching root.
Aug 13 07:18:47.007905 systemd-journald[178]: Journal stopped
Aug 13 07:18:48.310518 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:18:48.310607 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:18:48.310631 kernel: SELinux: policy capability open_perms=1
Aug 13 07:18:48.310643 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:18:48.310660 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:18:48.310672 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:18:48.310684 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:18:48.310701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:18:48.310713 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:18:48.310725 kernel: audit: type=1403 audit(1755069527.286:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:18:48.310738 systemd[1]: Successfully loaded SELinux policy in 61.745ms.
Aug 13 07:18:48.310764 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.638ms.
Aug 13 07:18:48.310778 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:18:48.310791 systemd[1]: Detected virtualization amazon.
Aug 13 07:18:48.310804 systemd[1]: Detected architecture x86-64.
Aug 13 07:18:48.310816 systemd[1]: Detected first boot.
Aug 13 07:18:48.310829 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:18:48.310841 zram_generator::config[1450]: No configuration found.
Aug 13 07:18:48.310858 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:18:48.310871 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:18:48.310884 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:18:48.310897 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:18:48.310910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:18:48.310923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:18:48.310936 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:18:48.310948 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:18:48.310961 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:18:48.310976 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:18:48.310989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:18:48.311001 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:18:48.311014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:18:48.311028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:18:48.311040 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:18:48.311052 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:18:48.311076 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:18:48.311092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:18:48.311105 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:18:48.311117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:18:48.311130 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:18:48.311142 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:18:48.311155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:18:48.311169 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:18:48.311181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:18:48.311199 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:18:48.311211 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:18:48.311224 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:18:48.311236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:18:48.311248 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:18:48.311260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:18:48.311273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:18:48.311285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:18:48.311297 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:18:48.311313 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:18:48.311325 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:18:48.311337 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:18:48.311349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:48.311361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:18:48.311374 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:18:48.311386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:18:48.311399 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:18:48.311412 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:18:48.311426 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:18:48.311444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:18:48.311457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:18:48.311470 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:18:48.311482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:18:48.311494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:18:48.311506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:18:48.311519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:18:48.311534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:18:48.311546 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:18:48.311558 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:18:48.311571 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:18:48.311584 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:18:48.311595 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:18:48.311608 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:18:48.311620 kernel: loop: module loaded
Aug 13 07:18:48.311632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:18:48.311647 kernel: fuse: init (API version 7.39)
Aug 13 07:18:48.311659 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:18:48.311671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:18:48.311683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:18:48.311695 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:18:48.311709 systemd[1]: Stopped verity-setup.service.
Aug 13 07:18:48.311721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:48.311733 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:18:48.311746 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:18:48.311791 systemd-journald[1542]: Collecting audit messages is disabled.
Aug 13 07:18:48.311816 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:18:48.311845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:18:48.311858 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:18:48.311870 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:18:48.311883 systemd-journald[1542]: Journal started
Aug 13 07:18:48.311910 systemd-journald[1542]: Runtime Journal (/run/log/journal/ec240a2e52bbce5e9ca5f82a38e17aee) is 4.7M, max 38.2M, 33.4M free.
Aug 13 07:18:48.007718 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:18:48.314794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:18:48.314818 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:18:48.034684 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 13 07:18:48.035160 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:18:48.315386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:18:48.316374 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:18:48.316505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:18:48.319158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:18:48.319302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:18:48.319952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:18:48.320113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:18:48.320710 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:18:48.320835 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:18:48.321485 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:18:48.321608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:18:48.322305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:18:48.322849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:18:48.329128 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:18:48.340827 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:18:48.347098 kernel: ACPI: bus type drm_connector registered
Aug 13 07:18:48.350363 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:18:48.356189 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:18:48.358302 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:18:48.358343 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:18:48.360079 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:18:48.370757 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:18:48.374224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:18:48.375277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:18:48.382734 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:18:48.387635 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:18:48.389180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:18:48.390233 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:18:48.390688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:18:48.391886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:18:48.406270 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:18:48.409249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:18:48.412339 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:18:48.413099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:18:48.414807 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:18:48.415335 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:18:48.420571 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:18:48.442177 systemd-journald[1542]: Time spent on flushing to /var/log/journal/ec240a2e52bbce5e9ca5f82a38e17aee is 71.764ms for 985 entries.
Aug 13 07:18:48.442177 systemd-journald[1542]: System Journal (/var/log/journal/ec240a2e52bbce5e9ca5f82a38e17aee) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:18:48.532241 systemd-journald[1542]: Received client request to flush runtime journal.
Aug 13 07:18:48.532295 kernel: loop0: detected capacity change from 0 to 140768
Aug 13 07:18:48.532325 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:18:48.443733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:18:48.453506 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:18:48.455394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:18:48.456500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:18:48.463758 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:18:48.480606 udevadm[1585]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:18:48.512937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:18:48.520315 systemd-tmpfiles[1579]: ACLs are not supported, ignoring.
Aug 13 07:18:48.520330 systemd-tmpfiles[1579]: ACLs are not supported, ignoring.
Aug 13 07:18:48.526320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:18:48.530583 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:18:48.536455 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:18:48.547388 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:18:48.547989 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:18:48.562259 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 07:18:48.602732 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:18:48.610260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:18:48.619095 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:18:48.625283 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Aug 13 07:18:48.625608 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Aug 13 07:18:48.630910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:18:48.680096 kernel: loop3: detected capacity change from 0 to 61336
Aug 13 07:18:48.752981 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 07:18:48.801347 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 07:18:48.846637 kernel: loop6: detected capacity change from 0 to 142488
Aug 13 07:18:48.880092 kernel: loop7: detected capacity change from 0 to 61336
Aug 13 07:18:48.900523 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 13 07:18:48.901829 (sd-merge)[1609]: Merged extensions into '/usr'.
Aug 13 07:18:48.908348 systemd[1]: Reloading requested from client PID 1578 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:18:48.908476 systemd[1]: Reloading...
Aug 13 07:18:48.987094 zram_generator::config[1631]: No configuration found.
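The (sd-merge) lines above are systemd-sysext assembling /usr from the base image plus the four extension images (each backed by one of the loop devices just probed), followed by a daemon reload to pick up units the extensions ship; the kubernetes image is found via the /etc/extensions/kubernetes.raw symlink the Ignition files stage wrote earlier. A sketch of the discovery step under that assumption (the three directories are the documented systemd-sysext search path):

    from pathlib import Path

    # systemd-sysext picks up *.raw images (or plain directories) from these
    # paths; /etc/extensions/kubernetes.raw was created by the files stage.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if base.is_dir():
            for image in sorted(base.glob("*.raw")):
                print(image, "->", image.resolve())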
Aug 13 07:18:49.260851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:18:49.347622 systemd[1]: Reloading finished in 438 ms.
Aug 13 07:18:49.380103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:18:49.381160 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:18:49.390298 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:18:49.394295 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:18:49.398203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:18:49.411218 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:18:49.411237 systemd[1]: Reloading...
Aug 13 07:18:49.416098 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:18:49.441456 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:18:49.442017 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:18:49.443460 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:18:49.443914 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Aug 13 07:18:49.444006 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Aug 13 07:18:49.452705 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:18:49.454475 systemd-tmpfiles[1688]: Skipping /boot
Aug 13 07:18:49.472698 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:18:49.472850 systemd-tmpfiles[1688]: Skipping /boot
Aug 13 07:18:49.485917 systemd-udevd[1689]: Using default interface naming scheme 'v255'.
Aug 13 07:18:49.548918 zram_generator::config[1715]: No configuration found.
Aug 13 07:18:49.639283 (udev-worker)[1728]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 07:18:49.741104 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Aug 13 07:18:49.782233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 07:18:49.788155 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:18:49.790217 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 07:18:49.800123 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Aug 13 07:18:49.803098 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1727)
Aug 13 07:18:49.825097 kernel: ACPI: button: Sleep Button [SLPF]
Aug 13 07:18:49.889908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:18:50.027088 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:18:50.043049 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:18:50.043215 systemd[1]: Reloading finished in 631 ms.
Aug 13 07:18:50.059044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:18:50.060196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:18:50.064714 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:18:50.094479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:18:50.095299 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:18:50.118430 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 07:18:50.119231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:50.124276 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:18:50.129275 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:18:50.131982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:18:50.134325 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:18:50.138538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:18:50.141780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:18:50.145347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:18:50.153010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:18:50.154443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:18:50.162176 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:18:50.167310 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:18:50.186371 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:18:50.201166 lvm[1884]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:18:50.198196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:18:50.199247 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:18:50.225016 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:18:50.232274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:18:50.233207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:18:50.238486 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:18:50.239563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:18:50.240911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:18:50.243270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:18:50.243505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:18:50.244622 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:18:50.244810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:18:50.256220 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:18:50.258125 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:18:50.258454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:18:50.264260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:18:50.274947 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:18:50.275760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:18:50.276810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:18:50.285656 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:18:50.302828 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:18:50.308451 lvm[1913]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:18:50.317617 augenrules[1920]: No rules
Aug 13 07:18:50.318589 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:18:50.320399 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:18:50.335887 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:18:50.369293 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:18:50.375107 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:18:50.376651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:18:50.412099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:18:50.412969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:18:50.421955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:18:50.467295 systemd-networkd[1896]: lo: Link UP
Aug 13 07:18:50.467310 systemd-networkd[1896]: lo: Gained carrier
Aug 13 07:18:50.469083 systemd-networkd[1896]: Enumeration completed
Aug 13 07:18:50.469232 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:18:50.471311 systemd-networkd[1896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:50.471316 systemd-networkd[1896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:18:50.476656 systemd-networkd[1896]: eth0: Link UP
Aug 13 07:18:50.476837 systemd-networkd[1896]: eth0: Gained carrier
Aug 13 07:18:50.476864 systemd-networkd[1896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:18:50.478292 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:18:50.487240 systemd-networkd[1896]: eth0: DHCPv4 address 172.31.17.50/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 07:18:50.492274 systemd-resolved[1897]: Positive Trust Anchors:
Aug 13 07:18:50.492561 systemd-resolved[1897]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:18:50.492603 systemd-resolved[1897]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:18:50.498835 systemd-resolved[1897]: Defaulting to hostname 'linux'.
Aug 13 07:18:50.500718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:18:50.501274 systemd[1]: Reached target network.target - Network.
Aug 13 07:18:50.501696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:18:50.502189 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:18:50.502666 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:18:50.503124 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:18:50.503639 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:18:50.504155 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:18:50.504558 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:18:50.504920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:18:50.504964 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:18:50.505344 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:18:50.507446 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:18:50.509161 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:18:50.514991 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:18:50.516040 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:18:50.516518 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:18:50.516854 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:18:50.517234 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:18:50.517263 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:18:50.518444 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:18:50.522235 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:18:50.525306 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:18:50.527138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:18:50.531257 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:18:50.531652 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:18:50.534339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:18:50.541339 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 07:18:50.546196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:18:50.556360 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 07:18:50.560706 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:18:50.562482 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:18:50.572645 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:18:50.574968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:18:50.575867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:18:50.583507 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:18:50.590253 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:18:50.600369 jq[1947]: false Aug 13 07:18:50.600514 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:18:50.600933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:18:50.622565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:18:50.623411 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:18:50.632440 jq[1961]: true Aug 13 07:18:50.653345 coreos-metadata[1945]: Aug 13 07:18:50.653 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 07:18:50.660362 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:18:50.671803 tar[1971]: linux-amd64/helm Aug 13 07:18:50.687136 coreos-metadata[1945]: Aug 13 07:18:50.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 07:18:50.697836 ntpd[1950]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:10 UTC 2025 (1): Starting Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: ---------------------------------------------------- Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: corporation. Support and training for ntp-4 are Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: available at https://www.nwtime.org/support Aug 13 07:18:50.698350 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: ---------------------------------------------------- Aug 13 07:18:50.697869 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 07:18:50.697877 ntpd[1950]: ---------------------------------------------------- Aug 13 07:18:50.697884 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Aug 13 07:18:50.697890 ntpd[1950]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Aug 13 07:18:50.697897 ntpd[1950]: corporation. Support and training for ntp-4 are Aug 13 07:18:50.697904 ntpd[1950]: available at https://www.nwtime.org/support Aug 13 07:18:50.697912 ntpd[1950]: ---------------------------------------------------- Aug 13 07:18:50.704685 coreos-metadata[1945]: Aug 13 07:18:50.701 INFO Fetch successful Aug 13 07:18:50.704685 coreos-metadata[1945]: Aug 13 07:18:50.701 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 07:18:50.704784 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: proto: precision = 0.054 usec (-24) Aug 13 07:18:50.704784 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: basedate set to 2025-07-31 Aug 13 07:18:50.704784 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: gps base set to 2025-08-03 (week 2378) Aug 13 07:18:50.703560 ntpd[1950]: proto: precision = 0.054 usec (-24) Aug 13 07:18:50.703810 ntpd[1950]: basedate set to 2025-07-31 Aug 13 07:18:50.703819 ntpd[1950]: gps base set to 2025-08-03 (week 2378) Aug 13 07:18:50.706549 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 07:18:50.708968 coreos-metadata[1945]: Aug 13 07:18:50.708 INFO Fetch successful Aug 13 07:18:50.708968 coreos-metadata[1945]: Aug 13 07:18:50.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 07:18:50.709873 coreos-metadata[1945]: Aug 13 07:18:50.709 INFO Fetch successful Aug 13 07:18:50.709873 coreos-metadata[1945]: Aug 13 07:18:50.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 07:18:50.709960 extend-filesystems[1948]: Found loop4 Aug 13 07:18:50.709960 extend-filesystems[1948]: Found loop5 Aug 13 07:18:50.711121 coreos-metadata[1945]: Aug 13 07:18:50.709 INFO Fetch successful Aug 13 07:18:50.711121 coreos-metadata[1945]: Aug 13 07:18:50.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found loop6 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found loop7 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p1 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p2 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p3 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found usr Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p4 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p6 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p7 Aug 13 07:18:50.712218 extend-filesystems[1948]: Found nvme0n1p9 Aug 13 07:18:50.712218 extend-filesystems[1948]: Checking size of /dev/nvme0n1p9 Aug 13 07:18:50.723950 update_engine[1959]: I20250813 07:18:50.714449 1959 main.cc:92] Flatcar Update Engine starting Aug 13 07:18:50.729157 jq[1977]: true Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.714 INFO Fetch failed with 404: resource not found Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.717 INFO Fetch successful Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.717 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.718 INFO Fetch successful Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.718 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.723 INFO Fetch successful Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.723 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.728 INFO Fetch successful Aug 13 07:18:50.729230 coreos-metadata[1945]: Aug 13 07:18:50.728 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listen normally on 3 eth0 172.31.17.50:123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listen normally on 4 lo [::1]:123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: bind(21) AF_INET6 fe80::411:2bff:fe34:3ad5%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: unable to create socket on eth0 (5) for fe80::411:2bff:fe34:3ad5%2#123 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: failed to init interface for address fe80::411:2bff:fe34:3ad5%2 Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: Listening on routing socket on fd #21 for interface updates Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:18:50.729458 ntpd[1950]: 13 Aug 07:18:50 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:18:50.715818 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 07:18:50.728034 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:18:50.715861 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 07:18:50.716034 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 07:18:50.716061 ntpd[1950]: Listen normally on 3 eth0 172.31.17.50:123 Aug 13 07:18:50.716117 ntpd[1950]: Listen normally on 4 lo [::1]:123 Aug 13 07:18:50.716151 ntpd[1950]: bind(21) AF_INET6 fe80::411:2bff:fe34:3ad5%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 07:18:50.716166 ntpd[1950]: unable to create socket on eth0 (5) for fe80::411:2bff:fe34:3ad5%2#123 Aug 13 07:18:50.716178 ntpd[1950]: failed to init interface for address fe80::411:2bff:fe34:3ad5%2 Aug 13 07:18:50.716201 ntpd[1950]: Listening on routing socket on fd #21 for interface updates Aug 13 07:18:50.723805 dbus-daemon[1946]: [system] SELinux support is enabled Aug 13 07:18:50.727280 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:18:50.734137 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:18:50.734831 coreos-metadata[1945]: Aug 13 07:18:50.732 INFO Fetch successful Aug 13 07:18:50.727309 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 07:18:50.734170 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
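[Editor's note] The coreos-metadata fetches above all target the EC2 instance-metadata service at 169.254.169.254. A minimal Python sketch of the same flow, assuming IMDSv2 token-based access (the sshkeys agent's later "Putting .../latest/api/token" line suggests tokens are in use; the paths mirror the 2021-01-03 API version from this log) — not the coreos-metadata implementation itself:

    import urllib.request, urllib.error

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=300):
        # IMDSv2: PUT a short-lived session token first, then send it
        # with every metadata read.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        with urllib.request.urlopen(req, timeout=2) as r:
            return r.read().decode()

    def imds_get(path, token):
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        try:
            with urllib.request.urlopen(req, timeout=2) as r:
                return r.read().decode()
        except urllib.error.HTTPError as e:
            # e.g. meta-data/ipv6 returns 404 on v4-only hosts,
            # matching the "Fetch failed with 404" line above.
            return f"<{e.code}: not available>"

    token = imds_token()
    for path in ("meta-data/instance-id", "meta-data/local-ipv4",
                 "meta-data/ipv6", "meta-data/hostname"):
        print(path, "->", imds_get(path, token))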
Aug 13 07:18:50.731224 dbus-daemon[1946]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1896 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 07:18:50.735043 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:18:50.735085 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:18:50.742236 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:18:50.742406 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:18:50.747972 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:18:50.764176 update_engine[1959]: I20250813 07:18:50.763669 1959 update_check_scheduler.cc:74] Next update check in 10m56s Aug 13 07:18:50.763858 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 07:18:50.764668 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:18:50.773261 extend-filesystems[1948]: Resized partition /dev/nvme0n1p9 Aug 13 07:18:50.776660 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:18:50.779114 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:18:50.787895 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 07:18:50.860126 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1721) Aug 13 07:18:50.903779 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:18:50.904996 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:18:50.909421 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 07:18:50.924671 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 07:18:50.924671 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:18:50.924671 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 07:18:50.933598 systemd-logind[1957]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:18:50.933616 systemd-logind[1957]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 13 07:18:50.938955 bash[2025]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:18:50.939062 extend-filesystems[1948]: Resized filesystem in /dev/nvme0n1p9 Aug 13 07:18:50.933634 systemd-logind[1957]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:18:50.936937 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:18:50.937138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:18:50.938636 systemd-logind[1957]: New seat seat0. Aug 13 07:18:50.942112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:18:50.955253 systemd[1]: Starting sshkeys.service... Aug 13 07:18:50.956553 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:18:51.006739 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
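[Editor's note] The extend-filesystems/resize2fs exchange above is an on-line ext4 grow of the mounted root partition. A sketch of the equivalent operation, assuming root privileges, e2fsprogs, and the partition name from this log:

    import subprocess

    DEV = "/dev/nvme0n1p9"  # root partition, per the log above

    # resize2fs grows a mounted ext4 filesystem on-line to fill the
    # (already enlarged) partition; omitting a size means "use all of it".
    subprocess.run(["resize2fs", DEV], check=True)

    # Confirm the new geometry (the kernel logged 4k blocks above).
    out = subprocess.run(["dumpe2fs", "-h", DEV],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if line.startswith(("Block count:", "Block size:")):
            print(line)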
Aug 13 07:18:51.019549 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:18:51.027981 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 07:18:51.029563 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 07:18:51.034609 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 07:18:51.039388 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 07:18:51.076584 locksmithd[2001]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:18:51.104059 polkitd[2103]: Started polkitd version 121 Aug 13 07:18:51.141930 polkitd[2103]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 07:18:51.142005 polkitd[2103]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 07:18:51.146142 polkitd[2103]: Finished loading, compiling and executing 2 rules Aug 13 07:18:51.148573 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 07:18:51.151604 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 07:18:51.155571 polkitd[2103]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 07:18:51.187700 coreos-metadata[2083]: Aug 13 07:18:51.187 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 07:18:51.189396 coreos-metadata[2083]: Aug 13 07:18:51.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 07:18:51.189864 coreos-metadata[2083]: Aug 13 07:18:51.189 INFO Fetch successful Aug 13 07:18:51.189995 coreos-metadata[2083]: Aug 13 07:18:51.189 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 07:18:51.190966 coreos-metadata[2083]: Aug 13 07:18:51.190 INFO Fetch successful Aug 13 07:18:51.193045 unknown[2083]: wrote ssh authorized keys file for user: core Aug 13 07:18:51.207342 systemd-resolved[1897]: System hostname changed to 'ip-172-31-17-50'. Aug 13 07:18:51.207557 systemd-hostnamed[2000]: Hostname set to (transient) Aug 13 07:18:51.229932 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:18:51.232517 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:18:51.236645 systemd[1]: Finished sshkeys.service. Aug 13 07:18:51.268338 containerd[1979]: time="2025-08-13T07:18:51.268256930Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:18:51.341092 containerd[1979]: time="2025-08-13T07:18:51.338528573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.341380 containerd[1979]: time="2025-08-13T07:18:51.341349829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:18:51.341444 containerd[1979]: time="2025-08-13T07:18:51.341433885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Aug 13 07:18:51.341491 containerd[1979]: time="2025-08-13T07:18:51.341481513Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:18:51.341675 containerd[1979]: time="2025-08-13T07:18:51.341662072Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:18:51.341745 containerd[1979]: time="2025-08-13T07:18:51.341734583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343167938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343190378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343358905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343371811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343383085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343392341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343454804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343636568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343743231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343755723Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:18:51.343899 containerd[1979]: time="2025-08-13T07:18:51.343824574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:18:51.344186 containerd[1979]: time="2025-08-13T07:18:51.343862742Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:18:51.349434 containerd[1979]: time="2025-08-13T07:18:51.349403301Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:18:51.349556 containerd[1979]: time="2025-08-13T07:18:51.349542311Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 13 07:18:51.349622 containerd[1979]: time="2025-08-13T07:18:51.349612045Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:18:51.349678 containerd[1979]: time="2025-08-13T07:18:51.349666803Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:18:51.349724 containerd[1979]: time="2025-08-13T07:18:51.349715520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:18:51.349899 containerd[1979]: time="2025-08-13T07:18:51.349884053Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:18:51.351219 containerd[1979]: time="2025-08-13T07:18:51.351201551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:18:51.351382 containerd[1979]: time="2025-08-13T07:18:51.351368531Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:18:51.351435 containerd[1979]: time="2025-08-13T07:18:51.351425826Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:18:51.351479 containerd[1979]: time="2025-08-13T07:18:51.351470697Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:18:51.351523 containerd[1979]: time="2025-08-13T07:18:51.351514131Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351567 containerd[1979]: time="2025-08-13T07:18:51.351557595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351623 containerd[1979]: time="2025-08-13T07:18:51.351613516Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351668 containerd[1979]: time="2025-08-13T07:18:51.351659246Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351712 containerd[1979]: time="2025-08-13T07:18:51.351704001Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351764 containerd[1979]: time="2025-08-13T07:18:51.351755818Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351806 containerd[1979]: time="2025-08-13T07:18:51.351798416Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351852 containerd[1979]: time="2025-08-13T07:18:51.351843908Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:18:51.351907 containerd[1979]: time="2025-08-13T07:18:51.351897904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.351952 containerd[1979]: time="2025-08-13T07:18:51.351943260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.352108 containerd[1979]: time="2025-08-13T07:18:51.352097338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Aug 13 07:18:51.352169 containerd[1979]: time="2025-08-13T07:18:51.352160245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.352218 containerd[1979]: time="2025-08-13T07:18:51.352209450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354097088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354119334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354132174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354144958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354160797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354173388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354186080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354199077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354219402Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354242117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354253915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354264039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354324311Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:18:51.355095 containerd[1979]: time="2025-08-13T07:18:51.354343405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354354114Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354365783Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354375823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354387547Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354408127Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:18:51.355412 containerd[1979]: time="2025-08-13T07:18:51.354424254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:18:51.355539 containerd[1979]: time="2025-08-13T07:18:51.354684914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:18:51.355539 containerd[1979]: time="2025-08-13T07:18:51.354737813Z" level=info msg="Connect containerd service" Aug 13 07:18:51.355539 containerd[1979]: time="2025-08-13T07:18:51.354769205Z" level=info msg="using legacy CRI server" Aug 13 07:18:51.355539 containerd[1979]: time="2025-08-13T07:18:51.354776049Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:18:51.355539 containerd[1979]: 
time="2025-08-13T07:18:51.354872002Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:18:51.358503 containerd[1979]: time="2025-08-13T07:18:51.358283863Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:18:51.358503 containerd[1979]: time="2025-08-13T07:18:51.358433324Z" level=info msg="Start subscribing containerd event" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358742940Z" level=info msg="Start recovering state" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358826258Z" level=info msg="Start event monitor" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358849163Z" level=info msg="Start snapshots syncer" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358860660Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358869195Z" level=info msg="Start streaming server" Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.358947730Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:18:51.359091 containerd[1979]: time="2025-08-13T07:18:51.359003322Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:18:51.359263 containerd[1979]: time="2025-08-13T07:18:51.359237607Z" level=info msg="containerd successfully booted in 0.092577s" Aug 13 07:18:51.360188 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:18:51.374286 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:18:51.399687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:18:51.408472 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:18:51.416011 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:18:51.416217 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:18:51.424484 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:18:51.435048 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:18:51.442371 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:18:51.444156 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:18:51.444873 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:18:51.593202 systemd-networkd[1896]: eth0: Gained IPv6LL Aug 13 07:18:51.598104 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:18:51.599597 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:18:51.600843 tar[1971]: linux-amd64/LICENSE Aug 13 07:18:51.601021 tar[1971]: linux-amd64/README.md Aug 13 07:18:51.609459 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 07:18:51.612272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:18:51.618400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:18:51.648810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:18:51.671512 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Aug 13 07:18:51.700881 amazon-ssm-agent[2165]: Initializing new seelog logger Aug 13 07:18:51.701289 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete Aug 13 07:18:51.701365 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.701365 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.701910 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 processing appconfig overrides Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 processing appconfig overrides Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 processing appconfig overrides Aug 13 07:18:51.703482 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO Proxy environment variables: Aug 13 07:18:51.705823 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.705823 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 07:18:51.705823 amazon-ssm-agent[2165]: 2025/08/13 07:18:51 processing appconfig overrides Aug 13 07:18:51.804059 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO https_proxy: Aug 13 07:18:51.902668 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO http_proxy: Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO no_proxy: Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO Checking if agent identity type OnPrem can be assumed Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO Checking if agent identity type EC2 can be assumed Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO Agent will take identity from EC2 Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [Registrar] Starting registrar module Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [EC2Identity] EC2 registration was successful. 
Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [CredentialRefresher] credentialRefresher has started Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 07:18:51.992727 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 07:18:52.000583 amazon-ssm-agent[2165]: 2025-08-13 07:18:51 INFO [CredentialRefresher] Next credential rotation will be in 31.3166610549 minutes Aug 13 07:18:53.006506 amazon-ssm-agent[2165]: 2025-08-13 07:18:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 07:18:53.108220 amazon-ssm-agent[2165]: 2025-08-13 07:18:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2187) started Aug 13 07:18:53.130328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:18:53.131831 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:18:53.133266 systemd[1]: Startup finished in 617ms (kernel) + 5.570s (initrd) + 5.907s (userspace) = 12.095s. Aug 13 07:18:53.134253 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:18:53.208874 amazon-ssm-agent[2165]: 2025-08-13 07:18:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 07:18:53.698330 ntpd[1950]: Listen normally on 6 eth0 [fe80::411:2bff:fe34:3ad5%2]:123 Aug 13 07:18:53.698780 ntpd[1950]: 13 Aug 07:18:53 ntpd[1950]: Listen normally on 6 eth0 [fe80::411:2bff:fe34:3ad5%2]:123 Aug 13 07:18:53.991444 kubelet[2199]: E0813 07:18:53.991317 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:18:53.993459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:18:53.993608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:18:53.994090 systemd[1]: kubelet.service: Consumed 1.067s CPU time. Aug 13 07:18:55.832683 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:18:55.837381 systemd[1]: Started sshd@0-172.31.17.50:22-147.75.109.163:54240.service - OpenSSH per-connection server daemon (147.75.109.163:54240). Aug 13 07:18:56.001017 sshd[2215]: Accepted publickey for core from 147.75.109.163 port 54240 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:56.003738 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:56.016141 systemd-logind[1957]: New session 1 of user core. Aug 13 07:18:56.017744 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:18:56.029523 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:18:56.043439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:18:56.051431 systemd[1]: Starting user@500.service - User Manager for UID 500... 
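[Editor's note] The kubelet exit above is expected on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is written by kubeadm, so until `kubeadm init` or `kubeadm join` runs, the unit exits with status 1 and systemd keeps rescheduling it (the restart counter climbs later in this log). A tiny illustrative preflight check mirroring the logged failure:

    import os, sys

    CFG = "/var/lib/kubelet/config.yaml"

    # kubeadm generates this file; its absence produces exactly the
    # "failed to load kubelet config file" error in the journal above.
    if not os.path.exists(CFG):
        sys.exit(f"kubelet not configured yet: {CFG} is missing "
                 "(expected until kubeadm has run)")
    print("kubelet config present")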
Aug 13 07:18:56.055738 (systemd)[2219]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:18:56.176801 systemd[2219]: Queued start job for default target default.target. Aug 13 07:18:56.183416 systemd[2219]: Created slice app.slice - User Application Slice. Aug 13 07:18:56.183457 systemd[2219]: Reached target paths.target - Paths. Aug 13 07:18:56.183477 systemd[2219]: Reached target timers.target - Timers. Aug 13 07:18:56.184921 systemd[2219]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:18:56.197517 systemd[2219]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:18:56.197669 systemd[2219]: Reached target sockets.target - Sockets. Aug 13 07:18:56.197690 systemd[2219]: Reached target basic.target - Basic System. Aug 13 07:18:56.197742 systemd[2219]: Reached target default.target - Main User Target. Aug 13 07:18:56.197780 systemd[2219]: Startup finished in 134ms. Aug 13 07:18:56.198252 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:18:56.206316 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:18:56.350422 systemd[1]: Started sshd@1-172.31.17.50:22-147.75.109.163:54250.service - OpenSSH per-connection server daemon (147.75.109.163:54250). Aug 13 07:18:56.501896 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 54250 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:56.503519 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:56.508404 systemd-logind[1957]: New session 2 of user core. Aug 13 07:18:56.515331 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:18:56.633507 sshd[2230]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:56.637378 systemd[1]: sshd@1-172.31.17.50:22-147.75.109.163:54250.service: Deactivated successfully. Aug 13 07:18:56.639206 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:18:56.639801 systemd-logind[1957]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:18:56.640715 systemd-logind[1957]: Removed session 2. Aug 13 07:18:56.666634 systemd[1]: Started sshd@2-172.31.17.50:22-147.75.109.163:54252.service - OpenSSH per-connection server daemon (147.75.109.163:54252). Aug 13 07:18:56.828266 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 54252 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:56.830127 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:56.834881 systemd-logind[1957]: New session 3 of user core. Aug 13 07:18:56.839290 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:18:56.960439 sshd[2237]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:56.963786 systemd[1]: sshd@2-172.31.17.50:22-147.75.109.163:54252.service: Deactivated successfully. Aug 13 07:18:56.966255 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:18:56.967795 systemd-logind[1957]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:18:56.969407 systemd-logind[1957]: Removed session 3. Aug 13 07:18:56.997581 systemd[1]: Started sshd@3-172.31.17.50:22-147.75.109.163:54254.service - OpenSSH per-connection server daemon (147.75.109.163:54254). 
Aug 13 07:18:57.155620 sshd[2244]: Accepted publickey for core from 147.75.109.163 port 54254 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:57.156920 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:57.162059 systemd-logind[1957]: New session 4 of user core. Aug 13 07:18:57.169298 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:18:57.289389 sshd[2244]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:57.292911 systemd[1]: sshd@3-172.31.17.50:22-147.75.109.163:54254.service: Deactivated successfully. Aug 13 07:18:57.295028 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:18:57.296519 systemd-logind[1957]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:18:57.297932 systemd-logind[1957]: Removed session 4. Aug 13 07:18:57.323971 systemd[1]: Started sshd@4-172.31.17.50:22-147.75.109.163:47512.service - OpenSSH per-connection server daemon (147.75.109.163:47512). Aug 13 07:18:57.481458 sshd[2251]: Accepted publickey for core from 147.75.109.163 port 47512 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:57.482924 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:57.488369 systemd-logind[1957]: New session 5 of user core. Aug 13 07:18:57.494312 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:18:57.605634 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:18:57.606063 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:18:57.617619 sudo[2254]: pam_unix(sudo:session): session closed for user root Aug 13 07:18:57.640334 sshd[2251]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:57.643625 systemd[1]: sshd@4-172.31.17.50:22-147.75.109.163:47512.service: Deactivated successfully. Aug 13 07:18:57.645327 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:18:57.646606 systemd-logind[1957]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:18:57.647661 systemd-logind[1957]: Removed session 5. Aug 13 07:18:57.673704 systemd[1]: Started sshd@5-172.31.17.50:22-147.75.109.163:47520.service - OpenSSH per-connection server daemon (147.75.109.163:47520). Aug 13 07:18:58.692146 systemd-resolved[1897]: Clock change detected. Flushing caches. Aug 13 07:18:58.831980 sshd[2259]: Accepted publickey for core from 147.75.109.163 port 47520 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:58.833339 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:58.837521 systemd-logind[1957]: New session 6 of user core. Aug 13 07:18:58.848913 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 07:18:58.945697 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:18:58.945993 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:18:58.951271 sudo[2263]: pam_unix(sudo:session): session closed for user root Aug 13 07:18:58.956908 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:18:58.957197 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:18:58.969973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:18:58.973050 auditctl[2266]: No rules Aug 13 07:18:58.973990 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:18:58.974195 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:18:58.979032 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:18:59.005592 augenrules[2284]: No rules Aug 13 07:18:59.007089 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:18:59.008809 sudo[2262]: pam_unix(sudo:session): session closed for user root Aug 13 07:18:59.031182 sshd[2259]: pam_unix(sshd:session): session closed for user core Aug 13 07:18:59.034728 systemd[1]: sshd@5-172.31.17.50:22-147.75.109.163:47520.service: Deactivated successfully. Aug 13 07:18:59.036755 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:18:59.038183 systemd-logind[1957]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:18:59.039291 systemd-logind[1957]: Removed session 6. Aug 13 07:18:59.063908 systemd[1]: Started sshd@6-172.31.17.50:22-147.75.109.163:47524.service - OpenSSH per-connection server daemon (147.75.109.163:47524). Aug 13 07:18:59.225274 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 47524 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:18:59.226972 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:18:59.231327 systemd-logind[1957]: New session 7 of user core. Aug 13 07:18:59.235849 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:18:59.337268 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:18:59.337557 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:18:59.802997 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:18:59.803111 (dockerd)[2311]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:19:00.418263 dockerd[2311]: time="2025-08-13T07:19:00.418197728Z" level=info msg="Starting up" Aug 13 07:19:00.554262 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3933330995-merged.mount: Deactivated successfully. Aug 13 07:19:00.604605 dockerd[2311]: time="2025-08-13T07:19:00.603990905Z" level=info msg="Loading containers: start." Aug 13 07:19:00.758841 kernel: Initializing XFRM netlink socket Aug 13 07:19:00.802438 (udev-worker)[2336]: Network interface NamePolicy= disabled on kernel command line. Aug 13 07:19:00.864952 systemd-networkd[1896]: docker0: Link UP Aug 13 07:19:00.879243 dockerd[2311]: time="2025-08-13T07:19:00.879198469Z" level=info msg="Loading containers: done." 
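[Editor's note] The auditctl/augenrules exchange above is the standard audit-rules reload cycle: the service flushes the kernel rule set, then augenrules recompiles whatever remains under /etc/audit/rules.d — here nothing, since the two .rules files were just deleted, hence "No rules" twice. A sketch of the same reload, assuming root and the audit userspace tools:

    import subprocess

    # Rebuild and load the kernel audit rules from /etc/audit/rules.d;
    # with the rules files removed this yields an empty rule set.
    subprocess.run(["augenrules", "--load"], check=True)

    # List what the kernel now holds; prints "No rules" when empty.
    out = subprocess.run(["auditctl", "-l"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)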
Aug 13 07:19:00.900067 dockerd[2311]: time="2025-08-13T07:19:00.900006410Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:19:00.900244 dockerd[2311]: time="2025-08-13T07:19:00.900125574Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:19:00.900277 dockerd[2311]: time="2025-08-13T07:19:00.900243521Z" level=info msg="Daemon has completed initialization" Aug 13 07:19:00.946960 dockerd[2311]: time="2025-08-13T07:19:00.946417699Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:19:00.946621 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:19:01.548584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1496751614-merged.mount: Deactivated successfully. Aug 13 07:19:02.350983 containerd[1979]: time="2025-08-13T07:19:02.350941697Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:19:02.926664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737294145.mount: Deactivated successfully. Aug 13 07:19:04.190052 containerd[1979]: time="2025-08-13T07:19:04.189992560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.191029 containerd[1979]: time="2025-08-13T07:19:04.190985760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 07:19:04.192497 containerd[1979]: time="2025-08-13T07:19:04.192203707Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.195019 containerd[1979]: time="2025-08-13T07:19:04.194978644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:04.195996 containerd[1979]: time="2025-08-13T07:19:04.195967067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.84498733s" Aug 13 07:19:04.196086 containerd[1979]: time="2025-08-13T07:19:04.196071987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:19:04.196749 containerd[1979]: time="2025-08-13T07:19:04.196570186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:19:05.237698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:19:05.249104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:05.529773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
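[Editor's note] Once dockerd logs "API listen on /run/docker.sock", the engine answers plain HTTP over that UNIX socket. A stdlib-only sketch of the /version call (the returned JSON should echo the version and commit fields logged above); UnixHTTPConnection is an illustrative helper, not a Docker SDK class:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a UNIX socket, enough to talk to /run/docker.sock."""
        def __init__(self, path):
            super().__init__("localhost")  # Host header value; ignored by dockerd
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")      # same endpoint `docker version` uses
    print(conn.getresponse().read().decode())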
Aug 13 07:19:05.543791 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:05.655717 kubelet[2519]: E0813 07:19:05.655033 2519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:05.660541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:05.660773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:19:06.024690 containerd[1979]: time="2025-08-13T07:19:06.024620719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.031657 containerd[1979]: time="2025-08-13T07:19:06.031579284Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 07:19:06.039646 containerd[1979]: time="2025-08-13T07:19:06.039572726Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.048029 containerd[1979]: time="2025-08-13T07:19:06.047951795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:06.049421 containerd[1979]: time="2025-08-13T07:19:06.049246693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.852410053s" Aug 13 07:19:06.049421 containerd[1979]: time="2025-08-13T07:19:06.049295869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:19:06.050611 containerd[1979]: time="2025-08-13T07:19:06.050571962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:19:07.284506 containerd[1979]: time="2025-08-13T07:19:07.284437858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:07.285849 containerd[1979]: time="2025-08-13T07:19:07.285798256Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 07:19:07.286780 containerd[1979]: time="2025-08-13T07:19:07.286745203Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:07.289599 containerd[1979]: time="2025-08-13T07:19:07.289543286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 13 07:19:07.291697 containerd[1979]: time="2025-08-13T07:19:07.290765386Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.240143756s" Aug 13 07:19:07.291697 containerd[1979]: time="2025-08-13T07:19:07.290808871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:19:07.291835 containerd[1979]: time="2025-08-13T07:19:07.291779724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:19:08.285920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035729734.mount: Deactivated successfully. Aug 13 07:19:08.873688 containerd[1979]: time="2025-08-13T07:19:08.873611609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.875575 containerd[1979]: time="2025-08-13T07:19:08.875527501Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:19:08.877893 containerd[1979]: time="2025-08-13T07:19:08.877834314Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.881152 containerd[1979]: time="2025-08-13T07:19:08.881111641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:08.881803 containerd[1979]: time="2025-08-13T07:19:08.881772460Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.589963182s" Aug 13 07:19:08.881871 containerd[1979]: time="2025-08-13T07:19:08.881804063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:19:08.882537 containerd[1979]: time="2025-08-13T07:19:08.882508138Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:19:09.401367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082692809.mount: Deactivated successfully. 
Aug 13 07:19:10.510652 containerd[1979]: time="2025-08-13T07:19:10.510594752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:10.512926 containerd[1979]: time="2025-08-13T07:19:10.512847999Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:19:10.515229 containerd[1979]: time="2025-08-13T07:19:10.515163454Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:10.518973 containerd[1979]: time="2025-08-13T07:19:10.518912686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:10.520546 containerd[1979]: time="2025-08-13T07:19:10.520354362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.637809011s" Aug 13 07:19:10.520546 containerd[1979]: time="2025-08-13T07:19:10.520399960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:19:10.521440 containerd[1979]: time="2025-08-13T07:19:10.521408549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:19:11.062421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962774003.mount: Deactivated successfully. 
Aug 13 07:19:11.074933 containerd[1979]: time="2025-08-13T07:19:11.074883272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:11.076966 containerd[1979]: time="2025-08-13T07:19:11.076803941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:19:11.081288 containerd[1979]: time="2025-08-13T07:19:11.078937041Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:11.084736 containerd[1979]: time="2025-08-13T07:19:11.084658218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:11.085713 containerd[1979]: time="2025-08-13T07:19:11.085658338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.210577ms" Aug 13 07:19:11.085822 containerd[1979]: time="2025-08-13T07:19:11.085719740Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:19:11.086297 containerd[1979]: time="2025-08-13T07:19:11.086249894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:19:11.596888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093687623.mount: Deactivated successfully. Aug 13 07:19:13.666485 containerd[1979]: time="2025-08-13T07:19:13.666416462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:13.671548 containerd[1979]: time="2025-08-13T07:19:13.671163382Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:19:13.674843 containerd[1979]: time="2025-08-13T07:19:13.674790853Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:13.682902 containerd[1979]: time="2025-08-13T07:19:13.682859143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:13.684129 containerd[1979]: time="2025-08-13T07:19:13.684073400Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.597788557s" Aug 13 07:19:13.684129 containerd[1979]: time="2025-08-13T07:19:13.684116252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:19:15.890646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
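[Editor's note] The "Pulled image" messages above report both the transferred size and the wall-clock duration, so effective registry throughput falls out directly. Computed from three of the pulls logged here (sizes are the compressed sizes containerd reports):

    # Rough pull throughput from the sizes/durations in the log above.
    pulls = {
        "kube-apiserver:v1.31.11": (28074559, 1.84498733),
        "kube-proxy:v1.31.11":     (30382631, 1.589963182),
        "etcd:3.5.15-0":           (56909194, 2.597788557),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 2**20:.1f} MiB/s")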
Aug 13 07:19:15.900722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:16.238863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:16.252112 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:19:16.322542 kubelet[2677]: E0813 07:19:16.322461 2677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:19:16.326379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:19:16.326577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:19:17.049182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:17.062081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:17.096012 systemd[1]: Reloading requested from client PID 2692 ('systemctl') (unit session-7.scope)... Aug 13 07:19:17.096030 systemd[1]: Reloading... Aug 13 07:19:17.223697 zram_generator::config[2736]: No configuration found. Aug 13 07:19:17.386778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:19:17.473473 systemd[1]: Reloading finished in 376 ms. Aug 13 07:19:17.530776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:19:17.530891 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:19:17.531416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:17.534230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:17.755068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:17.766416 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:19:17.823624 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:19:17.823624 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:19:17.823624 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
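
[Editor's note] The crash loop above is the expected pre-bootstrap state: the kubelet unit is enabled, but run.go exits with status 1 until something (typically kubeadm init or join) writes /var/lib/kubelet/config.yaml, so systemd keeps scheduling restarts. A minimal sketch of the load the kubelet is attempting, using the public KubeletConfiguration type; the printed fields are illustrative choices, not values taken from this host:

    package main

    import (
        "fmt"
        "os"

        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
        if err != nil {
            // The same "no such file or directory" the log shows above.
            fmt.Fprintln(os.Stderr, "kubelet config not written yet:", err)
            os.Exit(1)
        }
        var cfg kubeletv1beta1.KubeletConfiguration
        if err := yaml.Unmarshal(data, &cfg); err != nil {
            fmt.Fprintln(os.Stderr, "unparseable config:", err)
            os.Exit(1)
        }
        fmt.Println("cgroupDriver:", cfg.CgroupDriver)   // "systemd" in the nodeConfig dump below
        fmt.Println("staticPodPath:", cfg.StaticPodPath) // "/etc/kubernetes/manifests" below
    }
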
Aug 13 07:19:17.824116 kubelet[2797]: I0813 07:19:17.823722 2797 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:19:18.384699 kubelet[2797]: I0813 07:19:18.384641 2797 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:19:18.384699 kubelet[2797]: I0813 07:19:18.384693 2797 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:19:18.385089 kubelet[2797]: I0813 07:19:18.385064 2797 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:19:18.433936 kubelet[2797]: I0813 07:19:18.433606 2797 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:19:18.433936 kubelet[2797]: E0813 07:19:18.433902 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:18.449419 kubelet[2797]: E0813 07:19:18.449378 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:19:18.449419 kubelet[2797]: I0813 07:19:18.449409 2797 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:19:18.453690 kubelet[2797]: I0813 07:19:18.453628 2797 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:19:18.457657 kubelet[2797]: I0813 07:19:18.457591 2797 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:19:18.457850 kubelet[2797]: I0813 07:19:18.457800 2797 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:19:18.458024 kubelet[2797]: I0813 07:19:18.457845 2797 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:19:18.458112 kubelet[2797]: I0813 07:19:18.458032 2797 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:19:18.458112 kubelet[2797]: I0813 07:19:18.458042 2797 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:19:18.458164 kubelet[2797]: I0813 07:19:18.458153 2797 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:18.462972 kubelet[2797]: I0813 07:19:18.462931 2797 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:19:18.462972 kubelet[2797]: I0813 07:19:18.462971 2797 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:19:18.463135 kubelet[2797]: I0813 07:19:18.463015 2797 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:19:18.463135 kubelet[2797]: I0813 07:19:18.463033 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:19:18.472755 kubelet[2797]: I0813 07:19:18.472612 2797 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:19:18.475177 kubelet[2797]: W0813 07:19:18.475109 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:18.475396 kubelet[2797]: E0813 07:19:18.475192 2797 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:18.475396 kubelet[2797]: W0813 07:19:18.475348 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-50&limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:18.475396 kubelet[2797]: E0813 07:19:18.475393 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-50&limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:18.477883 kubelet[2797]: I0813 07:19:18.476976 2797 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:19:18.477883 kubelet[2797]: W0813 07:19:18.477034 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:19:18.477883 kubelet[2797]: I0813 07:19:18.477558 2797 server.go:1274] "Started kubelet" Aug 13 07:19:18.479636 kubelet[2797]: I0813 07:19:18.478777 2797 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:19:18.480540 kubelet[2797]: I0813 07:19:18.479924 2797 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:19:18.482708 kubelet[2797]: I0813 07:19:18.482169 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:19:18.482708 kubelet[2797]: I0813 07:19:18.482432 2797 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:19:18.486685 kubelet[2797]: I0813 07:19:18.485554 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:19:18.486685 kubelet[2797]: E0813 07:19:18.482626 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.50:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-50.185b4276f527c244 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-50,UID:ip-172-31-17-50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-50,},FirstTimestamp:2025-08-13 07:19:18.47753786 +0000 UTC m=+0.706331677,LastTimestamp:2025-08-13 07:19:18.47753786 +0000 UTC m=+0.706331677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-50,}" Aug 13 07:19:18.486685 kubelet[2797]: I0813 07:19:18.485901 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:19:18.491717 kubelet[2797]: I0813 07:19:18.491696 2797 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:19:18.497698 kubelet[2797]: E0813 07:19:18.497650 2797 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"ip-172-31-17-50\" not found" Aug 13 07:19:18.502395 kubelet[2797]: I0813 07:19:18.498120 2797 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:19:18.502395 kubelet[2797]: I0813 07:19:18.498163 2797 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:19:18.502395 kubelet[2797]: W0813 07:19:18.498445 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:18.502395 kubelet[2797]: E0813 07:19:18.498485 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:18.502395 kubelet[2797]: E0813 07:19:18.498534 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": dial tcp 172.31.17.50:6443: connect: connection refused" interval="200ms" Aug 13 07:19:18.502395 kubelet[2797]: I0813 07:19:18.501294 2797 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:19:18.502395 kubelet[2797]: I0813 07:19:18.501312 2797 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:19:18.502395 kubelet[2797]: I0813 07:19:18.501385 2797 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:19:18.530402 kubelet[2797]: I0813 07:19:18.529235 2797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:19:18.532169 kubelet[2797]: I0813 07:19:18.532131 2797 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:19:18.532169 kubelet[2797]: I0813 07:19:18.532159 2797 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:19:18.532169 kubelet[2797]: I0813 07:19:18.532179 2797 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:19:18.532323 kubelet[2797]: E0813 07:19:18.532218 2797 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:19:18.533931 kubelet[2797]: W0813 07:19:18.533898 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:18.534044 kubelet[2797]: E0813 07:19:18.533939 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:18.539989 kubelet[2797]: I0813 07:19:18.539965 2797 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:19:18.539989 kubelet[2797]: I0813 07:19:18.539981 2797 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:19:18.540124 kubelet[2797]: I0813 07:19:18.540056 2797 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:18.544557 kubelet[2797]: I0813 07:19:18.544525 2797 policy_none.go:49] "None policy: Start" Aug 13 07:19:18.545253 kubelet[2797]: I0813 07:19:18.545236 2797 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:19:18.545506 kubelet[2797]: I0813 07:19:18.545391 2797 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:19:18.558272 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:19:18.567429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:19:18.571789 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:19:18.586216 kubelet[2797]: I0813 07:19:18.584766 2797 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:19:18.586216 kubelet[2797]: I0813 07:19:18.585045 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:19:18.586216 kubelet[2797]: I0813 07:19:18.585062 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:19:18.586216 kubelet[2797]: I0813 07:19:18.585328 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:19:18.587418 kubelet[2797]: E0813 07:19:18.587389 2797 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-50\" not found" Aug 13 07:19:18.642608 systemd[1]: Created slice kubepods-burstable-poda63987787fdea41e8a5e2ab9cedb9bd5.slice - libcontainer container kubepods-burstable-poda63987787fdea41e8a5e2ab9cedb9bd5.slice. Aug 13 07:19:18.653831 systemd[1]: Created slice kubepods-burstable-podfc46a971714d69fc52a6ad381b0194f5.slice - libcontainer container kubepods-burstable-podfc46a971714d69fc52a6ad381b0194f5.slice. 
Aug 13 07:19:18.668173 systemd[1]: Created slice kubepods-burstable-pod5783f7fd26cf22a3161dd91370866977.slice - libcontainer container kubepods-burstable-pod5783f7fd26cf22a3161dd91370866977.slice. Aug 13 07:19:18.687367 kubelet[2797]: I0813 07:19:18.687294 2797 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:18.687635 kubelet[2797]: E0813 07:19:18.687611 2797 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.50:6443/api/v1/nodes\": dial tcp 172.31.17.50:6443: connect: connection refused" node="ip-172-31-17-50" Aug 13 07:19:18.699337 kubelet[2797]: E0813 07:19:18.699173 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": dial tcp 172.31.17.50:6443: connect: connection refused" interval="400ms" Aug 13 07:19:18.799879 kubelet[2797]: I0813 07:19:18.799820 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:18.799879 kubelet[2797]: I0813 07:19:18.799867 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:18.799879 kubelet[2797]: I0813 07:19:18.799885 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:18.800165 kubelet[2797]: I0813 07:19:18.799901 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:18.800165 kubelet[2797]: I0813 07:19:18.799917 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:18.800165 kubelet[2797]: I0813 07:19:18.799934 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:18.800165 kubelet[2797]: I0813 07:19:18.799950 2797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:18.800165 kubelet[2797]: I0813 07:19:18.799974 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc46a971714d69fc52a6ad381b0194f5-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-50\" (UID: \"fc46a971714d69fc52a6ad381b0194f5\") " pod="kube-system/kube-scheduler-ip-172-31-17-50" Aug 13 07:19:18.800294 kubelet[2797]: I0813 07:19:18.799990 2797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-ca-certs\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:18.890333 kubelet[2797]: I0813 07:19:18.890010 2797 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:18.890333 kubelet[2797]: E0813 07:19:18.890304 2797 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.50:6443/api/v1/nodes\": dial tcp 172.31.17.50:6443: connect: connection refused" node="ip-172-31-17-50" Aug 13 07:19:18.952266 containerd[1979]: time="2025-08-13T07:19:18.952140836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-50,Uid:a63987787fdea41e8a5e2ab9cedb9bd5,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:18.974206 containerd[1979]: time="2025-08-13T07:19:18.973836736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-50,Uid:5783f7fd26cf22a3161dd91370866977,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:18.974206 containerd[1979]: time="2025-08-13T07:19:18.973838533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-50,Uid:fc46a971714d69fc52a6ad381b0194f5,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:19.100578 kubelet[2797]: E0813 07:19:19.100530 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": dial tcp 172.31.17.50:6443: connect: connection refused" interval="800ms" Aug 13 07:19:19.291412 kubelet[2797]: W0813 07:19:19.290714 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:19.291412 kubelet[2797]: E0813 07:19:19.290808 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:19.292177 kubelet[2797]: I0813 07:19:19.292149 2797 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:19.292455 kubelet[2797]: E0813 07:19:19.292424 2797 kubelet_node_status.go:95] 
"Unable to register node with API server" err="Post \"https://172.31.17.50:6443/api/v1/nodes\": dial tcp 172.31.17.50:6443: connect: connection refused" node="ip-172-31-17-50" Aug 13 07:19:19.301592 kubelet[2797]: W0813 07:19:19.301512 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-50&limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:19.301592 kubelet[2797]: E0813 07:19:19.301582 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-50&limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:19.440383 kubelet[2797]: W0813 07:19:19.440341 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:19.440383 kubelet[2797]: E0813 07:19:19.440386 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:19.468564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476270662.mount: Deactivated successfully. Aug 13 07:19:19.485577 containerd[1979]: time="2025-08-13T07:19:19.485431203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:19:19.487686 containerd[1979]: time="2025-08-13T07:19:19.487615597Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:19:19.492997 containerd[1979]: time="2025-08-13T07:19:19.492917721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:19:19.495370 containerd[1979]: time="2025-08-13T07:19:19.495283495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:19:19.497393 containerd[1979]: time="2025-08-13T07:19:19.497353370Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:19:19.499916 containerd[1979]: time="2025-08-13T07:19:19.499864809Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:19:19.501689 containerd[1979]: time="2025-08-13T07:19:19.501565895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:19:19.505369 containerd[1979]: time="2025-08-13T07:19:19.505299718Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:19:19.506705 containerd[1979]: time="2025-08-13T07:19:19.505824373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.892979ms" Aug 13 07:19:19.506866 containerd[1979]: time="2025-08-13T07:19:19.506835741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.614648ms" Aug 13 07:19:19.510873 containerd[1979]: time="2025-08-13T07:19:19.510834273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.843236ms" Aug 13 07:19:19.738694 containerd[1979]: time="2025-08-13T07:19:19.737413552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:19.738694 containerd[1979]: time="2025-08-13T07:19:19.737484999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:19.738694 containerd[1979]: time="2025-08-13T07:19:19.737523359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.738694 containerd[1979]: time="2025-08-13T07:19:19.737631359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.749770 containerd[1979]: time="2025-08-13T07:19:19.749605777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:19.749917 containerd[1979]: time="2025-08-13T07:19:19.749802032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:19.749917 containerd[1979]: time="2025-08-13T07:19:19.749821470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.750143 containerd[1979]: time="2025-08-13T07:19:19.750062033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.753448 containerd[1979]: time="2025-08-13T07:19:19.753056369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:19.753448 containerd[1979]: time="2025-08-13T07:19:19.753124384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:19.753448 containerd[1979]: time="2025-08-13T07:19:19.753166533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.753448 containerd[1979]: time="2025-08-13T07:19:19.753315136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:19.761174 kubelet[2797]: W0813 07:19:19.759847 2797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.50:6443: connect: connection refused Aug 13 07:19:19.761449 kubelet[2797]: E0813 07:19:19.761413 2797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:19.782916 systemd[1]: Started cri-containerd-5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949.scope - libcontainer container 5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949. Aug 13 07:19:19.800866 systemd[1]: Started cri-containerd-44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497.scope - libcontainer container 44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497. Aug 13 07:19:19.806348 systemd[1]: Started cri-containerd-b93bfdba0cbfbf5ba95d1f60da976aee80bbff27dd8c502de275c5ab1f925567.scope - libcontainer container b93bfdba0cbfbf5ba95d1f60da976aee80bbff27dd8c502de275c5ab1f925567. 
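
[Editor's note] The three RunPodSandbox requests and the cri-containerd-… scopes that follow are the CRI leg of static pod startup: for each manifest the kubelet first asks the runtime for a sandbox (the pause container pulled earlier), then creates the real containers inside it. A minimal sketch of the same call made directly against containerd's CRI socket; the socket path is an assumption, and the metadata is copied from the kube-scheduler request logged above:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-scheduler-ip-172-31-17-50",
                    Uid:       "fc46a971714d69fc52a6ad381b0194f5",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 44cf6041e4b0... in the log
    }
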
Aug 13 07:19:19.892290 containerd[1979]: time="2025-08-13T07:19:19.892147115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-50,Uid:5783f7fd26cf22a3161dd91370866977,Namespace:kube-system,Attempt:0,} returns sandbox id \"b93bfdba0cbfbf5ba95d1f60da976aee80bbff27dd8c502de275c5ab1f925567\"" Aug 13 07:19:19.895513 containerd[1979]: time="2025-08-13T07:19:19.895380281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-50,Uid:a63987787fdea41e8a5e2ab9cedb9bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949\"" Aug 13 07:19:19.901297 kubelet[2797]: E0813 07:19:19.901085 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": dial tcp 172.31.17.50:6443: connect: connection refused" interval="1.6s" Aug 13 07:19:19.910742 containerd[1979]: time="2025-08-13T07:19:19.910702195Z" level=info msg="CreateContainer within sandbox \"5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:19:19.912501 containerd[1979]: time="2025-08-13T07:19:19.912456315Z" level=info msg="CreateContainer within sandbox \"b93bfdba0cbfbf5ba95d1f60da976aee80bbff27dd8c502de275c5ab1f925567\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:19:19.918488 containerd[1979]: time="2025-08-13T07:19:19.918446387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-50,Uid:fc46a971714d69fc52a6ad381b0194f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497\"" Aug 13 07:19:19.922084 containerd[1979]: time="2025-08-13T07:19:19.922047612Z" level=info msg="CreateContainer within sandbox \"44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:19:19.960943 containerd[1979]: time="2025-08-13T07:19:19.960892712Z" level=info msg="CreateContainer within sandbox \"5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4\"" Aug 13 07:19:19.961722 containerd[1979]: time="2025-08-13T07:19:19.961586293Z" level=info msg="StartContainer for \"33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4\"" Aug 13 07:19:19.965248 containerd[1979]: time="2025-08-13T07:19:19.965198173Z" level=info msg="CreateContainer within sandbox \"b93bfdba0cbfbf5ba95d1f60da976aee80bbff27dd8c502de275c5ab1f925567\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61f79a7d06b31e8d0036233c7b77c1de48276cde2aa02ac8b0b8e4f9fe34d8d2\"" Aug 13 07:19:19.968215 containerd[1979]: time="2025-08-13T07:19:19.968025851Z" level=info msg="StartContainer for \"61f79a7d06b31e8d0036233c7b77c1de48276cde2aa02ac8b0b8e4f9fe34d8d2\"" Aug 13 07:19:19.970494 containerd[1979]: time="2025-08-13T07:19:19.970458140Z" level=info msg="CreateContainer within sandbox \"44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779\"" Aug 13 07:19:19.972729 
containerd[1979]: time="2025-08-13T07:19:19.971347312Z" level=info msg="StartContainer for \"56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779\"" Aug 13 07:19:20.028883 systemd[1]: Started cri-containerd-33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4.scope - libcontainer container 33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4. Aug 13 07:19:20.032296 systemd[1]: Started cri-containerd-61f79a7d06b31e8d0036233c7b77c1de48276cde2aa02ac8b0b8e4f9fe34d8d2.scope - libcontainer container 61f79a7d06b31e8d0036233c7b77c1de48276cde2aa02ac8b0b8e4f9fe34d8d2. Aug 13 07:19:20.041959 systemd[1]: Started cri-containerd-56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779.scope - libcontainer container 56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779. Aug 13 07:19:20.097146 kubelet[2797]: I0813 07:19:20.096066 2797 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:20.097836 kubelet[2797]: E0813 07:19:20.097704 2797 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.50:6443/api/v1/nodes\": dial tcp 172.31.17.50:6443: connect: connection refused" node="ip-172-31-17-50" Aug 13 07:19:20.131125 containerd[1979]: time="2025-08-13T07:19:20.131082844Z" level=info msg="StartContainer for \"61f79a7d06b31e8d0036233c7b77c1de48276cde2aa02ac8b0b8e4f9fe34d8d2\" returns successfully" Aug 13 07:19:20.132300 containerd[1979]: time="2025-08-13T07:19:20.131773798Z" level=info msg="StartContainer for \"33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4\" returns successfully" Aug 13 07:19:20.156459 containerd[1979]: time="2025-08-13T07:19:20.156403920Z" level=info msg="StartContainer for \"56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779\" returns successfully" Aug 13 07:19:20.559438 kubelet[2797]: E0813 07:19:20.559393 2797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.50:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:19:21.699645 kubelet[2797]: I0813 07:19:21.699613 2797 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:22.236000 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 07:19:22.717091 kubelet[2797]: E0813 07:19:22.717033 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-50\" not found" node="ip-172-31-17-50" Aug 13 07:19:22.822399 kubelet[2797]: I0813 07:19:22.822357 2797 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-50" Aug 13 07:19:22.943751 kubelet[2797]: E0813 07:19:22.943717 2797 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-50\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-50" Aug 13 07:19:23.476176 kubelet[2797]: I0813 07:19:23.476134 2797 apiserver.go:52] "Watching apiserver" Aug 13 07:19:23.498770 kubelet[2797]: I0813 07:19:23.498706 2797 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:19:24.734723 systemd[1]: Reloading requested from client PID 3077 ('systemctl') (unit session-7.scope)... 
Aug 13 07:19:24.734747 systemd[1]: Reloading... Aug 13 07:19:24.869718 zram_generator::config[3120]: No configuration found. Aug 13 07:19:24.999551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:19:25.106156 systemd[1]: Reloading finished in 370 ms. Aug 13 07:19:25.150036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:25.165443 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:19:25.165639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:25.165700 systemd[1]: kubelet.service: Consumed 1.123s CPU time, 128.5M memory peak, 0B memory swap peak. Aug 13 07:19:25.173108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:19:25.450309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:19:25.463051 (kubelet)[3177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:19:25.542874 kubelet[3177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:19:25.542874 kubelet[3177]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:19:25.542874 kubelet[3177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:19:25.543384 kubelet[3177]: I0813 07:19:25.542933 3177 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:19:25.553702 kubelet[3177]: I0813 07:19:25.552259 3177 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:19:25.553702 kubelet[3177]: I0813 07:19:25.552286 3177 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:19:25.553702 kubelet[3177]: I0813 07:19:25.552515 3177 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:19:25.554072 kubelet[3177]: I0813 07:19:25.554053 3177 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:19:25.560474 kubelet[3177]: I0813 07:19:25.560440 3177 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:19:25.566722 kubelet[3177]: E0813 07:19:25.565733 3177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:19:25.566722 kubelet[3177]: I0813 07:19:25.565766 3177 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:19:25.567802 kubelet[3177]: I0813 07:19:25.567766 3177 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:19:25.567905 kubelet[3177]: I0813 07:19:25.567892 3177 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:19:25.568053 kubelet[3177]: I0813 07:19:25.568025 3177 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:19:25.568237 kubelet[3177]: I0813 07:19:25.568052 3177 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:19:25.572312 kubelet[3177]: I0813 07:19:25.572272 3177 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:19:25.572312 kubelet[3177]: I0813 07:19:25.572302 3177 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:19:25.572440 kubelet[3177]: I0813 07:19:25.572341 3177 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:25.576108 kubelet[3177]: I0813 07:19:25.576066 3177 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:19:25.576108 kubelet[3177]: I0813 07:19:25.576092 3177 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:19:25.580885 kubelet[3177]: I0813 07:19:25.579378 3177 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:19:25.580885 kubelet[3177]: I0813 07:19:25.579402 3177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:19:25.582175 kubelet[3177]: I0813 07:19:25.582117 3177 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:19:25.582782 kubelet[3177]: I0813 07:19:25.582759 3177 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:19:25.589785 kubelet[3177]: I0813 07:19:25.589755 3177 server.go:1274] "Started kubelet" Aug 13 07:19:25.597431 kubelet[3177]: I0813 07:19:25.596591 3177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:19:25.598792 kubelet[3177]: I0813 
07:19:25.598106 3177 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:19:25.600430 kubelet[3177]: I0813 07:19:25.599759 3177 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:19:25.603683 kubelet[3177]: I0813 07:19:25.603624 3177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:19:25.605711 kubelet[3177]: I0813 07:19:25.603987 3177 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:19:25.605711 kubelet[3177]: I0813 07:19:25.604216 3177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:19:25.611588 kubelet[3177]: I0813 07:19:25.611543 3177 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:19:25.613320 kubelet[3177]: I0813 07:19:25.613285 3177 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:19:25.616398 kubelet[3177]: E0813 07:19:25.616370 3177 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:19:25.617343 kubelet[3177]: I0813 07:19:25.617323 3177 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:19:25.620404 kubelet[3177]: I0813 07:19:25.620381 3177 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:19:25.620824 kubelet[3177]: I0813 07:19:25.620809 3177 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:19:25.621218 kubelet[3177]: I0813 07:19:25.621194 3177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:19:25.627066 kubelet[3177]: I0813 07:19:25.627026 3177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:19:25.630720 kubelet[3177]: I0813 07:19:25.630680 3177 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:19:25.630720 kubelet[3177]: I0813 07:19:25.630726 3177 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:19:25.630943 kubelet[3177]: I0813 07:19:25.630752 3177 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:19:25.630943 kubelet[3177]: E0813 07:19:25.630806 3177 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:19:25.676060 kubelet[3177]: I0813 07:19:25.676032 3177 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:19:25.676060 kubelet[3177]: I0813 07:19:25.676055 3177 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:19:25.676272 kubelet[3177]: I0813 07:19:25.676079 3177 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:19:25.676272 kubelet[3177]: I0813 07:19:25.676266 3177 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:19:25.676376 kubelet[3177]: I0813 07:19:25.676280 3177 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:19:25.676376 kubelet[3177]: I0813 07:19:25.676306 3177 policy_none.go:49] "None policy: Start" Aug 13 07:19:25.677876 kubelet[3177]: I0813 07:19:25.677833 3177 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:19:25.678240 kubelet[3177]: I0813 07:19:25.678224 3177 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:19:25.678496 kubelet[3177]: I0813 07:19:25.678487 3177 state_mem.go:75] "Updated machine memory state" Aug 13 07:19:25.684952 kubelet[3177]: I0813 07:19:25.684932 3177 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:19:25.685503 kubelet[3177]: I0813 07:19:25.685489 3177 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:19:25.685757 kubelet[3177]: I0813 07:19:25.685715 3177 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:19:25.686799 kubelet[3177]: I0813 07:19:25.686784 3177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:19:25.796092 kubelet[3177]: I0813 07:19:25.796007 3177 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-50" Aug 13 07:19:25.806580 kubelet[3177]: I0813 07:19:25.806543 3177 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-50" Aug 13 07:19:25.807780 kubelet[3177]: I0813 07:19:25.807544 3177 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-50" Aug 13 07:19:25.820560 kubelet[3177]: I0813 07:19:25.820528 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:25.820560 kubelet[3177]: I0813 07:19:25.820560 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:25.821500 kubelet[3177]: I0813 07:19:25.820578 3177 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:25.821500 kubelet[3177]: I0813 07:19:25.820595 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:25.821500 kubelet[3177]: I0813 07:19:25.820879 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:25.821500 kubelet[3177]: I0813 07:19:25.820910 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:25.821500 kubelet[3177]: I0813 07:19:25.820926 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a63987787fdea41e8a5e2ab9cedb9bd5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-50\" (UID: \"a63987787fdea41e8a5e2ab9cedb9bd5\") " pod="kube-system/kube-controller-manager-ip-172-31-17-50" Aug 13 07:19:25.821758 kubelet[3177]: I0813 07:19:25.820945 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc46a971714d69fc52a6ad381b0194f5-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-50\" (UID: \"fc46a971714d69fc52a6ad381b0194f5\") " pod="kube-system/kube-scheduler-ip-172-31-17-50" Aug 13 07:19:25.821758 kubelet[3177]: I0813 07:19:25.820976 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5783f7fd26cf22a3161dd91370866977-ca-certs\") pod \"kube-apiserver-ip-172-31-17-50\" (UID: \"5783f7fd26cf22a3161dd91370866977\") " pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:26.581954 kubelet[3177]: I0813 07:19:26.581890 3177 apiserver.go:52] "Watching apiserver" Aug 13 07:19:26.615702 kubelet[3177]: I0813 07:19:26.613940 3177 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:19:26.665496 kubelet[3177]: E0813 07:19:26.665450 3177 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-50\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-50" Aug 13 07:19:26.696456 kubelet[3177]: I0813 07:19:26.695452 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-50" podStartSLOduration=1.695410275 podStartE2EDuration="1.695410275s" podCreationTimestamp="2025-08-13 
07:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:26.695106322 +0000 UTC m=+1.223626077" watchObservedRunningTime="2025-08-13 07:19:26.695410275 +0000 UTC m=+1.223930022" Aug 13 07:19:26.712142 kubelet[3177]: I0813 07:19:26.711049 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-50" podStartSLOduration=1.7110266699999999 podStartE2EDuration="1.71102667s" podCreationTimestamp="2025-08-13 07:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:26.71100743 +0000 UTC m=+1.239527186" watchObservedRunningTime="2025-08-13 07:19:26.71102667 +0000 UTC m=+1.239546418" Aug 13 07:19:26.732911 kubelet[3177]: I0813 07:19:26.732847 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-50" podStartSLOduration=1.73282326 podStartE2EDuration="1.73282326s" podCreationTimestamp="2025-08-13 07:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:26.721309974 +0000 UTC m=+1.249829730" watchObservedRunningTime="2025-08-13 07:19:26.73282326 +0000 UTC m=+1.261343016" Aug 13 07:19:29.324152 kubelet[3177]: I0813 07:19:29.324075 3177 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:19:29.325037 kubelet[3177]: I0813 07:19:29.324551 3177 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:19:29.325076 containerd[1979]: time="2025-08-13T07:19:29.324376312Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:19:30.367056 systemd[1]: Created slice kubepods-besteffort-pod8ee95ba9_02ed_4134_bd22_3a79dca13f23.slice - libcontainer container kubepods-besteffort-pod8ee95ba9_02ed_4134_bd22_3a79dca13f23.slice. Aug 13 07:19:30.449847 systemd[1]: Created slice kubepods-besteffort-pod554b2ead_6c9a_4ab3_a990_38755a393230.slice - libcontainer container kubepods-besteffort-pod554b2ead_6c9a_4ab3_a990_38755a393230.slice. 
Aug 13 07:19:30.454721 kubelet[3177]: I0813 07:19:30.452364 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee95ba9-02ed-4134-bd22-3a79dca13f23-xtables-lock\") pod \"kube-proxy-v7ws9\" (UID: \"8ee95ba9-02ed-4134-bd22-3a79dca13f23\") " pod="kube-system/kube-proxy-v7ws9"
Aug 13 07:19:30.454721 kubelet[3177]: I0813 07:19:30.452411 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ee95ba9-02ed-4134-bd22-3a79dca13f23-kube-proxy\") pod \"kube-proxy-v7ws9\" (UID: \"8ee95ba9-02ed-4134-bd22-3a79dca13f23\") " pod="kube-system/kube-proxy-v7ws9"
Aug 13 07:19:30.454721 kubelet[3177]: I0813 07:19:30.452435 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee95ba9-02ed-4134-bd22-3a79dca13f23-lib-modules\") pod \"kube-proxy-v7ws9\" (UID: \"8ee95ba9-02ed-4134-bd22-3a79dca13f23\") " pod="kube-system/kube-proxy-v7ws9"
Aug 13 07:19:30.454721 kubelet[3177]: I0813 07:19:30.452462 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddjc\" (UniqueName: \"kubernetes.io/projected/8ee95ba9-02ed-4134-bd22-3a79dca13f23-kube-api-access-7ddjc\") pod \"kube-proxy-v7ws9\" (UID: \"8ee95ba9-02ed-4134-bd22-3a79dca13f23\") " pod="kube-system/kube-proxy-v7ws9"
Aug 13 07:19:30.553182 kubelet[3177]: I0813 07:19:30.552805 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/554b2ead-6c9a-4ab3-a990-38755a393230-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-b4vml\" (UID: \"554b2ead-6c9a-4ab3-a990-38755a393230\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-b4vml"
Aug 13 07:19:30.553182 kubelet[3177]: I0813 07:19:30.552843 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsf4\" (UniqueName: \"kubernetes.io/projected/554b2ead-6c9a-4ab3-a990-38755a393230-kube-api-access-crsf4\") pod \"tigera-operator-5bf8dfcb4-b4vml\" (UID: \"554b2ead-6c9a-4ab3-a990-38755a393230\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-b4vml"
Aug 13 07:19:30.677971 containerd[1979]: time="2025-08-13T07:19:30.677920210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7ws9,Uid:8ee95ba9-02ed-4134-bd22-3a79dca13f23,Namespace:kube-system,Attempt:0,}"
Aug 13 07:19:30.711612 containerd[1979]: time="2025-08-13T07:19:30.711333760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:19:30.711612 containerd[1979]: time="2025-08-13T07:19:30.711402036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:19:30.711612 containerd[1979]: time="2025-08-13T07:19:30.711412914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:30.711612 containerd[1979]: time="2025-08-13T07:19:30.711514857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:30.731912 systemd[1]: Started cri-containerd-9f8a4f21f9f6ca7317a526fb00d54283e4adc856fbe34d5dbe4a218418c2f52e.scope - libcontainer container 9f8a4f21f9f6ca7317a526fb00d54283e4adc856fbe34d5dbe4a218418c2f52e.
Aug 13 07:19:30.759240 containerd[1979]: time="2025-08-13T07:19:30.758691614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7ws9,Uid:8ee95ba9-02ed-4134-bd22-3a79dca13f23,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f8a4f21f9f6ca7317a526fb00d54283e4adc856fbe34d5dbe4a218418c2f52e\""
Aug 13 07:19:30.759240 containerd[1979]: time="2025-08-13T07:19:30.758762256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-b4vml,Uid:554b2ead-6c9a-4ab3-a990-38755a393230,Namespace:tigera-operator,Attempt:0,}"
Aug 13 07:19:30.762848 containerd[1979]: time="2025-08-13T07:19:30.762810646Z" level=info msg="CreateContainer within sandbox \"9f8a4f21f9f6ca7317a526fb00d54283e4adc856fbe34d5dbe4a218418c2f52e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:19:30.799930 containerd[1979]: time="2025-08-13T07:19:30.799827324Z" level=info msg="CreateContainer within sandbox \"9f8a4f21f9f6ca7317a526fb00d54283e4adc856fbe34d5dbe4a218418c2f52e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d02ce344009aaa2ce2515a44b09fd777b5d22adfe40687d1a1a74c28c4314437\""
Aug 13 07:19:30.802666 containerd[1979]: time="2025-08-13T07:19:30.801527507Z" level=info msg="StartContainer for \"d02ce344009aaa2ce2515a44b09fd777b5d22adfe40687d1a1a74c28c4314437\""
Aug 13 07:19:30.810690 containerd[1979]: time="2025-08-13T07:19:30.809697774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:19:30.810903 containerd[1979]: time="2025-08-13T07:19:30.810369674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:19:30.810903 containerd[1979]: time="2025-08-13T07:19:30.810395856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:30.810903 containerd[1979]: time="2025-08-13T07:19:30.810491919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:19:30.841942 systemd[1]: Started cri-containerd-2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0.scope - libcontainer container 2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0.
Aug 13 07:19:30.861860 systemd[1]: Started cri-containerd-d02ce344009aaa2ce2515a44b09fd777b5d22adfe40687d1a1a74c28c4314437.scope - libcontainer container d02ce344009aaa2ce2515a44b09fd777b5d22adfe40687d1a1a74c28c4314437.
Aug 13 07:19:30.910787 containerd[1979]: time="2025-08-13T07:19:30.910614227Z" level=info msg="StartContainer for \"d02ce344009aaa2ce2515a44b09fd777b5d22adfe40687d1a1a74c28c4314437\" returns successfully"
Aug 13 07:19:30.917976 containerd[1979]: time="2025-08-13T07:19:30.917859668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-b4vml,Uid:554b2ead-6c9a-4ab3-a990-38755a393230,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0\""
Aug 13 07:19:30.923289 containerd[1979]: time="2025-08-13T07:19:30.922848037Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 07:19:32.452577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914527883.mount: Deactivated successfully.
Aug 13 07:19:33.335638 containerd[1979]: time="2025-08-13T07:19:33.335583151Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:33.336768 containerd[1979]: time="2025-08-13T07:19:33.336711022Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 07:19:33.338483 containerd[1979]: time="2025-08-13T07:19:33.338395142Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:33.341355 containerd[1979]: time="2025-08-13T07:19:33.341316756Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:33.342445 containerd[1979]: time="2025-08-13T07:19:33.342088463Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.419199381s"
Aug 13 07:19:33.342445 containerd[1979]: time="2025-08-13T07:19:33.342133057Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 07:19:33.346061 containerd[1979]: time="2025-08-13T07:19:33.346006155Z" level=info msg="CreateContainer within sandbox \"2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 07:19:33.359271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113145270.mount: Deactivated successfully.
Aug 13 07:19:33.370630 containerd[1979]: time="2025-08-13T07:19:33.370506581Z" level=info msg="CreateContainer within sandbox \"2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a\""
Aug 13 07:19:33.372397 containerd[1979]: time="2025-08-13T07:19:33.372366561Z" level=info msg="StartContainer for \"ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a\""
Aug 13 07:19:33.398976 systemd[1]: run-containerd-runc-k8s.io-ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a-runc.u8Hqsj.mount: Deactivated successfully.
Aug 13 07:19:33.410910 systemd[1]: Started cri-containerd-ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a.scope - libcontainer container ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a.
Aug 13 07:19:33.550162 containerd[1979]: time="2025-08-13T07:19:33.548022992Z" level=info msg="StartContainer for \"ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a\" returns successfully"
Aug 13 07:19:33.687626 kubelet[3177]: I0813 07:19:33.687552 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v7ws9" podStartSLOduration=3.687528035 podStartE2EDuration="3.687528035s" podCreationTimestamp="2025-08-13 07:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:19:31.700401972 +0000 UTC m=+6.228921728" watchObservedRunningTime="2025-08-13 07:19:33.687528035 +0000 UTC m=+8.216047791"
Aug 13 07:19:37.034799 update_engine[1959]: I20250813 07:19:37.034720 1959 update_attempter.cc:509] Updating boot flags...
Aug 13 07:19:37.186964 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3538)
Aug 13 07:19:37.556706 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3537)
Aug 13 07:19:41.141819 sudo[2295]: pam_unix(sudo:session): session closed for user root
Aug 13 07:19:41.168373 sshd[2292]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:41.175509 systemd[1]: sshd@6-172.31.17.50:22-147.75.109.163:47524.service: Deactivated successfully.
Aug 13 07:19:41.176202 systemd-logind[1957]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:19:41.178044 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:19:41.178240 systemd[1]: session-7.scope: Consumed 5.544s CPU time, 141.6M memory peak, 0B memory swap peak.
Aug 13 07:19:41.181352 systemd-logind[1957]: Removed session 7.
Aug 13 07:19:46.718736 kubelet[3177]: I0813 07:19:46.715607 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-b4vml" podStartSLOduration=14.289280708 podStartE2EDuration="16.713558776s" podCreationTimestamp="2025-08-13 07:19:30 +0000 UTC" firstStartedPulling="2025-08-13 07:19:30.919423814 +0000 UTC m=+5.447943558" lastFinishedPulling="2025-08-13 07:19:33.343701874 +0000 UTC m=+7.872221626" observedRunningTime="2025-08-13 07:19:33.687887002 +0000 UTC m=+8.216406757" watchObservedRunningTime="2025-08-13 07:19:46.713558776 +0000 UTC m=+21.242078534"
Aug 13 07:19:46.744179 systemd[1]: Created slice kubepods-besteffort-podafe89f99_96cd_48c5_bd26_4fe20087c93f.slice - libcontainer container kubepods-besteffort-podafe89f99_96cd_48c5_bd26_4fe20087c93f.slice.
Aug 13 07:19:46.763703 kubelet[3177]: I0813 07:19:46.762109 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe89f99-96cd-48c5-bd26-4fe20087c93f-tigera-ca-bundle\") pod \"calico-typha-98f699587-dmvpl\" (UID: \"afe89f99-96cd-48c5-bd26-4fe20087c93f\") " pod="calico-system/calico-typha-98f699587-dmvpl" Aug 13 07:19:46.763703 kubelet[3177]: I0813 07:19:46.762154 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/afe89f99-96cd-48c5-bd26-4fe20087c93f-typha-certs\") pod \"calico-typha-98f699587-dmvpl\" (UID: \"afe89f99-96cd-48c5-bd26-4fe20087c93f\") " pod="calico-system/calico-typha-98f699587-dmvpl" Aug 13 07:19:46.763703 kubelet[3177]: I0813 07:19:46.762186 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbtg\" (UniqueName: \"kubernetes.io/projected/afe89f99-96cd-48c5-bd26-4fe20087c93f-kube-api-access-btbtg\") pod \"calico-typha-98f699587-dmvpl\" (UID: \"afe89f99-96cd-48c5-bd26-4fe20087c93f\") " pod="calico-system/calico-typha-98f699587-dmvpl" Aug 13 07:19:47.059701 containerd[1979]: time="2025-08-13T07:19:47.059274952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98f699587-dmvpl,Uid:afe89f99-96cd-48c5-bd26-4fe20087c93f,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:47.086392 systemd[1]: Created slice kubepods-besteffort-podfa954576_06d5_4e36_8631_3c83ed5e13eb.slice - libcontainer container kubepods-besteffort-podfa954576_06d5_4e36_8631_3c83ed5e13eb.slice. Aug 13 07:19:47.110693 containerd[1979]: time="2025-08-13T07:19:47.110481206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:47.110693 containerd[1979]: time="2025-08-13T07:19:47.110540300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:47.110693 containerd[1979]: time="2025-08-13T07:19:47.110565684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:47.112445 containerd[1979]: time="2025-08-13T07:19:47.112326269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:47.164470 kubelet[3177]: I0813 07:19:47.164311 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa954576-06d5-4e36-8631-3c83ed5e13eb-node-certs\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164470 kubelet[3177]: I0813 07:19:47.164392 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-var-run-calico\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164470 kubelet[3177]: I0813 07:19:47.164410 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-cni-net-dir\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164470 kubelet[3177]: I0813 07:19:47.164426 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-flexvol-driver-host\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164470 kubelet[3177]: I0813 07:19:47.164476 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-cni-bin-dir\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164698 kubelet[3177]: I0813 07:19:47.164491 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa954576-06d5-4e36-8631-3c83ed5e13eb-tigera-ca-bundle\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164698 kubelet[3177]: I0813 07:19:47.164535 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-xtables-lock\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164698 kubelet[3177]: I0813 07:19:47.164552 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-lib-modules\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164698 kubelet[3177]: I0813 07:19:47.164571 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-cni-log-dir\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164698 kubelet[3177]: 
I0813 07:19:47.164629 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-policysync\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164844 kubelet[3177]: I0813 07:19:47.164644 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa954576-06d5-4e36-8631-3c83ed5e13eb-var-lib-calico\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.164844 kubelet[3177]: I0813 07:19:47.164705 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56gt\" (UniqueName: \"kubernetes.io/projected/fa954576-06d5-4e36-8631-3c83ed5e13eb-kube-api-access-m56gt\") pod \"calico-node-pznrm\" (UID: \"fa954576-06d5-4e36-8631-3c83ed5e13eb\") " pod="calico-system/calico-node-pznrm" Aug 13 07:19:47.175959 systemd[1]: Started cri-containerd-a875eccf081d30e8670143a6d236552af7a77cf45fc8a32f7c828520533037e1.scope - libcontainer container a875eccf081d30e8670143a6d236552af7a77cf45fc8a32f7c828520533037e1. Aug 13 07:19:47.244027 containerd[1979]: time="2025-08-13T07:19:47.243986377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98f699587-dmvpl,Uid:afe89f99-96cd-48c5-bd26-4fe20087c93f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a875eccf081d30e8670143a6d236552af7a77cf45fc8a32f7c828520533037e1\"" Aug 13 07:19:47.246653 containerd[1979]: time="2025-08-13T07:19:47.246620576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:19:47.280075 kubelet[3177]: E0813 07:19:47.279570 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.280075 kubelet[3177]: W0813 07:19:47.279601 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.280075 kubelet[3177]: E0813 07:19:47.279632 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.288882 kubelet[3177]: E0813 07:19:47.288822 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.289033 kubelet[3177]: W0813 07:19:47.289013 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.289999 kubelet[3177]: E0813 07:19:47.289131 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.364447 kubelet[3177]: E0813 07:19:47.363106 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce" Aug 13 07:19:47.365226 kubelet[3177]: E0813 07:19:47.365198 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.365613 kubelet[3177]: W0813 07:19:47.365429 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.365613 kubelet[3177]: E0813 07:19:47.365462 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.366881 kubelet[3177]: E0813 07:19:47.366502 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.366881 kubelet[3177]: W0813 07:19:47.366518 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.366881 kubelet[3177]: E0813 07:19:47.366537 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.368556 kubelet[3177]: E0813 07:19:47.367810 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.368556 kubelet[3177]: W0813 07:19:47.367979 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.368556 kubelet[3177]: E0813 07:19:47.368008 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.369034 kubelet[3177]: E0813 07:19:47.368943 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.369034 kubelet[3177]: W0813 07:19:47.368958 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.369034 kubelet[3177]: E0813 07:19:47.368974 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.370466 kubelet[3177]: E0813 07:19:47.370249 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.370466 kubelet[3177]: W0813 07:19:47.370265 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.370466 kubelet[3177]: E0813 07:19:47.370282 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.371052 kubelet[3177]: E0813 07:19:47.370828 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.371052 kubelet[3177]: W0813 07:19:47.370842 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.371052 kubelet[3177]: E0813 07:19:47.370856 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.371942 kubelet[3177]: E0813 07:19:47.371597 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.371942 kubelet[3177]: W0813 07:19:47.371714 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.371942 kubelet[3177]: E0813 07:19:47.371733 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.373392 kubelet[3177]: E0813 07:19:47.372585 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.373392 kubelet[3177]: W0813 07:19:47.372600 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.373392 kubelet[3177]: E0813 07:19:47.372615 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.374133 kubelet[3177]: E0813 07:19:47.373936 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.374133 kubelet[3177]: W0813 07:19:47.373951 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.374133 kubelet[3177]: E0813 07:19:47.373964 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.375356 kubelet[3177]: E0813 07:19:47.375342 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.375572 kubelet[3177]: W0813 07:19:47.375468 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.375659 kubelet[3177]: E0813 07:19:47.375646 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.376173 kubelet[3177]: E0813 07:19:47.376159 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.376355 kubelet[3177]: W0813 07:19:47.376338 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.376512 kubelet[3177]: E0813 07:19:47.376450 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.377975 kubelet[3177]: E0813 07:19:47.377628 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.377975 kubelet[3177]: W0813 07:19:47.377642 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.377975 kubelet[3177]: E0813 07:19:47.377656 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.378634 kubelet[3177]: E0813 07:19:47.378442 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.378634 kubelet[3177]: W0813 07:19:47.378457 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.378634 kubelet[3177]: E0813 07:19:47.378470 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.379628 kubelet[3177]: E0813 07:19:47.379249 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.379628 kubelet[3177]: W0813 07:19:47.379264 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.379628 kubelet[3177]: E0813 07:19:47.379277 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.380242 kubelet[3177]: E0813 07:19:47.379953 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.380242 kubelet[3177]: W0813 07:19:47.379968 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.380242 kubelet[3177]: E0813 07:19:47.379981 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.381015 kubelet[3177]: E0813 07:19:47.380720 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.381015 kubelet[3177]: W0813 07:19:47.380735 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.381015 kubelet[3177]: E0813 07:19:47.380749 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.381797 kubelet[3177]: E0813 07:19:47.381522 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.381797 kubelet[3177]: W0813 07:19:47.381537 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.381797 kubelet[3177]: E0813 07:19:47.381552 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.382826 kubelet[3177]: E0813 07:19:47.382440 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.382826 kubelet[3177]: W0813 07:19:47.382702 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.382826 kubelet[3177]: E0813 07:19:47.382717 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.383822 kubelet[3177]: E0813 07:19:47.383307 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.383822 kubelet[3177]: W0813 07:19:47.383321 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.383822 kubelet[3177]: E0813 07:19:47.383335 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.384119 kubelet[3177]: E0813 07:19:47.384056 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.384119 kubelet[3177]: W0813 07:19:47.384070 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.384119 kubelet[3177]: E0813 07:19:47.384084 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.385971 kubelet[3177]: E0813 07:19:47.385918 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.385971 kubelet[3177]: W0813 07:19:47.385933 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.385971 kubelet[3177]: E0813 07:19:47.385947 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.386324 kubelet[3177]: I0813 07:19:47.386166 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrbj\" (UniqueName: \"kubernetes.io/projected/4568b68e-bbde-4103-a6ee-c309f4a8bbce-kube-api-access-tlrbj\") pod \"csi-node-driver-xh28p\" (UID: \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\") " pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:47.393965 kubelet[3177]: E0813 07:19:47.393809 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.393965 kubelet[3177]: W0813 07:19:47.393832 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.393965 kubelet[3177]: E0813 07:19:47.393854 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.396888 containerd[1979]: time="2025-08-13T07:19:47.396783244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pznrm,Uid:fa954576-06d5-4e36-8631-3c83ed5e13eb,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:47.397706 kubelet[3177]: E0813 07:19:47.397135 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.397706 kubelet[3177]: W0813 07:19:47.397156 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.397706 kubelet[3177]: E0813 07:19:47.397179 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.398810 kubelet[3177]: E0813 07:19:47.398374 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.398810 kubelet[3177]: W0813 07:19:47.398392 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.398810 kubelet[3177]: E0813 07:19:47.398412 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.398810 kubelet[3177]: I0813 07:19:47.398461 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4568b68e-bbde-4103-a6ee-c309f4a8bbce-varrun\") pod \"csi-node-driver-xh28p\" (UID: \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\") " pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:47.400930 kubelet[3177]: E0813 07:19:47.400270 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.402171 kubelet[3177]: W0813 07:19:47.401781 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.402171 kubelet[3177]: E0813 07:19:47.401812 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.402171 kubelet[3177]: I0813 07:19:47.401844 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4568b68e-bbde-4103-a6ee-c309f4a8bbce-socket-dir\") pod \"csi-node-driver-xh28p\" (UID: \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\") " pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:47.405058 kubelet[3177]: E0813 07:19:47.405039 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.405727 kubelet[3177]: W0813 07:19:47.405283 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.407721 kubelet[3177]: E0813 07:19:47.405309 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.407721 kubelet[3177]: I0813 07:19:47.406381 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4568b68e-bbde-4103-a6ee-c309f4a8bbce-kubelet-dir\") pod \"csi-node-driver-xh28p\" (UID: \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\") " pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:47.408120 kubelet[3177]: E0813 07:19:47.408104 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.408990 kubelet[3177]: W0813 07:19:47.408247 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.409777 kubelet[3177]: E0813 07:19:47.409756 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.410013 kubelet[3177]: I0813 07:19:47.409995 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4568b68e-bbde-4103-a6ee-c309f4a8bbce-registration-dir\") pod \"csi-node-driver-xh28p\" (UID: \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\") " pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:47.411472 kubelet[3177]: E0813 07:19:47.410543 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.411472 kubelet[3177]: W0813 07:19:47.410567 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.411767 kubelet[3177]: E0813 07:19:47.411615 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.412700 kubelet[3177]: E0813 07:19:47.412540 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.412700 kubelet[3177]: W0813 07:19:47.412558 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.412700 kubelet[3177]: E0813 07:19:47.412647 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.413113 kubelet[3177]: E0813 07:19:47.412902 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.413113 kubelet[3177]: W0813 07:19:47.412915 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.413113 kubelet[3177]: E0813 07:19:47.413020 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.413263 kubelet[3177]: E0813 07:19:47.413173 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.413263 kubelet[3177]: W0813 07:19:47.413183 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.415713 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416057 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.417532 kubelet[3177]: W0813 07:19:47.416071 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416087 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416336 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.417532 kubelet[3177]: W0813 07:19:47.416347 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416359 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416605 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.417532 kubelet[3177]: W0813 07:19:47.416618 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.417532 kubelet[3177]: E0813 07:19:47.416631 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.418092 kubelet[3177]: E0813 07:19:47.416920 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.418092 kubelet[3177]: W0813 07:19:47.416932 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.418092 kubelet[3177]: E0813 07:19:47.416945 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:19:47.457378 containerd[1979]: time="2025-08-13T07:19:47.457249874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:19:47.457378 containerd[1979]: time="2025-08-13T07:19:47.457321694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:19:47.457378 containerd[1979]: time="2025-08-13T07:19:47.457336716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:47.461705 containerd[1979]: time="2025-08-13T07:19:47.457452061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:19:47.498896 systemd[1]: Started cri-containerd-e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce.scope - libcontainer container e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce. Aug 13 07:19:47.513214 kubelet[3177]: E0813 07:19:47.513191 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.513400 kubelet[3177]: W0813 07:19:47.513383 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.513500 kubelet[3177]: E0813 07:19:47.513487 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.514025 kubelet[3177]: E0813 07:19:47.513999 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.514144 kubelet[3177]: W0813 07:19:47.514131 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.514258 kubelet[3177]: E0813 07:19:47.514246 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:19:47.515924 kubelet[3177]: E0813 07:19:47.514814 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:19:47.516049 kubelet[3177]: W0813 07:19:47.516032 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:19:47.516168 kubelet[3177]: E0813 07:19:47.516154 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:19:47.692785 containerd[1979]: time="2025-08-13T07:19:47.692541834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pznrm,Uid:fa954576-06d5-4e36-8631-3c83ed5e13eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\""
Aug 13 07:19:48.634174 kubelet[3177]: E0813 07:19:48.631109 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce"
Aug 13 07:19:48.632657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972492466.mount: Deactivated successfully.
Aug 13 07:19:49.546591 containerd[1979]: time="2025-08-13T07:19:49.546539409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:49.547884 containerd[1979]: time="2025-08-13T07:19:49.547767254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:19:49.550049 containerd[1979]: time="2025-08-13T07:19:49.548897270Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:49.551840 containerd[1979]: time="2025-08-13T07:19:49.551148932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:19:49.551840 containerd[1979]: time="2025-08-13T07:19:49.551749487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.305090175s"
Aug 13 07:19:49.579215 containerd[1979]: time="2025-08-13T07:19:49.551776885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:19:49.584168 containerd[1979]: time="2025-08-13T07:19:49.583968842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:19:49.602430 containerd[1979]: time="2025-08-13T07:19:49.602324084Z" level=info msg="CreateContainer within sandbox \"a875eccf081d30e8670143a6d236552af7a77cf45fc8a32f7c828520533037e1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:19:49.619215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945720980.mount: Deactivated successfully.
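The ImageCreate/Pulled/PullImage sequence is containerd resolving, downloading, and unpacking ghcr.io/flatcar/calico/typha:v3.30.2 on behalf of the CRI. The equivalent pull can be issued against the same daemon with the public containerd Go client; this is an illustrative sketch, with the default socket path and the k8s.io namespace (where CRI-managed images live) assumed:

// pull_typha.go: sketch of the image pull recorded above, via the containerd client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, matching the PullImage/ImageCreate entries in the log.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}

The "in 2.305090175s" figure in the Pulled entry is the wall-clock time for exactly this resolve, download, and unpack cycle.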
Aug 13 07:19:49.624710 containerd[1979]: time="2025-08-13T07:19:49.623525282Z" level=info msg="CreateContainer within sandbox \"a875eccf081d30e8670143a6d236552af7a77cf45fc8a32f7c828520533037e1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"49438adde99056187cb42d7436d9351871fe909ba05fd6662efa541261072c31\""
Aug 13 07:19:49.625039 containerd[1979]: time="2025-08-13T07:19:49.625017663Z" level=info msg="StartContainer for \"49438adde99056187cb42d7436d9351871fe909ba05fd6662efa541261072c31\""
Aug 13 07:19:49.667191 systemd[1]: Started cri-containerd-49438adde99056187cb42d7436d9351871fe909ba05fd6662efa541261072c31.scope - libcontainer container 49438adde99056187cb42d7436d9351871fe909ba05fd6662efa541261072c31.
Aug 13 07:19:49.715555 containerd[1979]: time="2025-08-13T07:19:49.715511140Z" level=info msg="StartContainer for \"49438adde99056187cb42d7436d9351871fe909ba05fd6662efa541261072c31\" returns successfully"
Aug 13 07:19:49.806286 kubelet[3177]: E0813 07:19:49.805932 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:19:49.806286 kubelet[3177]: W0813 07:19:49.805965 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:19:49.806286 kubelet[3177]: E0813 07:19:49.806091 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-entry FlexVolume probe failure repeats verbatim, with only timestamps advancing, through Aug 13 07:19:49.857; the duplicate entries are elided.]
Aug 13 07:19:50.631922 kubelet[3177]: E0813 07:19:50.631864 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce"
Aug 13 07:19:50.779336 kubelet[3177]: I0813 07:19:50.779148 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:19:50.824120 kubelet[3177]: E0813 07:19:50.824085 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:19:50.824120 kubelet[3177]: W0813 07:19:50.824112 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:19:50.824622 kubelet[3177]: E0813 07:19:50.824210 3177 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same three-entry FlexVolume probe failure repeats verbatim, with only timestamps advancing, through Aug 13 07:19:50.870; the duplicate entries are elided.]
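The "NetworkReady=false ... cni plugin not initialized" pod-sync error is the kubelet relaying a runtime status condition it receives from containerd's CRI Status RPC; non-host-network pods such as csi-node-driver-xh28p stay pending until calico-node writes a CNI config. A sketch that reads the same condition directly over the CRI gRPC API, with containerd's default socket path assumed:

// netready.go: sketch querying the CRI runtime status behind "NetworkReady=false".
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Conditions include RuntimeReady and NetworkReady, with reason/message
	// matching what the kubelet echoes into pod_workers errors above.
	for _, cond := range resp.GetStatus().GetConditions() {
		log.Printf("%s=%v reason=%s", cond.GetType(), cond.GetStatus(), cond.GetReason())
	}
}

Once calico-node installs its CNI configuration under /etc/cni/net.d, NetworkReady flips to true and these pod-sync errors stop.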
Error: unexpected end of JSON input" Aug 13 07:19:50.875664 containerd[1979]: time="2025-08-13T07:19:50.874992602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:50.877324 containerd[1979]: time="2025-08-13T07:19:50.877274314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:19:50.890588 containerd[1979]: time="2025-08-13T07:19:50.888939639Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:50.893283 containerd[1979]: time="2025-08-13T07:19:50.893240661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:50.894245 containerd[1979]: time="2025-08-13T07:19:50.894201036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.310189347s" Aug 13 07:19:50.894321 containerd[1979]: time="2025-08-13T07:19:50.894252243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:19:50.897548 containerd[1979]: time="2025-08-13T07:19:50.897513137Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:19:50.920229 containerd[1979]: time="2025-08-13T07:19:50.920178952Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1\"" Aug 13 07:19:50.921108 containerd[1979]: time="2025-08-13T07:19:50.921050822Z" level=info msg="StartContainer for \"bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1\"" Aug 13 07:19:50.978894 systemd[1]: Started cri-containerd-bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1.scope - libcontainer container bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1. Aug 13 07:19:51.012475 containerd[1979]: time="2025-08-13T07:19:51.012102020Z" level=info msg="StartContainer for \"bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1\" returns successfully" Aug 13 07:19:51.025703 systemd[1]: cri-containerd-bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1.scope: Deactivated successfully. Aug 13 07:19:51.066199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1-rootfs.mount: Deactivated successfully. 
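The driver-call.go noise above is kubelet's dynamic FlexVolume probe: the plugin directory nodeagent~uds maps to a driver binary named uds that does not yet exist on this node, so the "init" call produces no output, and decoding that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of the failure mode (standard library only; the DriverStatus shape is illustrative, not kubelet's real schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the JSON reply a FlexVolume
// driver is expected to print in response to "init".
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// The driver binary was never found, so kubelet captured empty output.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```

The flexvol-driver container whose pull and start are logged just above is presumably what resolves this: Calico's pod2daemon-flexvol image installs the uds binary into the kubelet plugin directory, after which the probe succeeds and the messages stop.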
Aug 13 07:19:51.261578 containerd[1979]: time="2025-08-13T07:19:51.259908015Z" level=info msg="shim disconnected" id=bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1 namespace=k8s.io Aug 13 07:19:51.261578 containerd[1979]: time="2025-08-13T07:19:51.261486315Z" level=warning msg="cleaning up after shim disconnected" id=bd02b2a9d59c643dfb01ab52e84738c6adf5f65c989fe92c12300cde099115b1 namespace=k8s.io Aug 13 07:19:51.261578 containerd[1979]: time="2025-08-13T07:19:51.261499029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:19:51.784060 containerd[1979]: time="2025-08-13T07:19:51.783566579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:19:51.800115 kubelet[3177]: I0813 07:19:51.800049 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-98f699587-dmvpl" podStartSLOduration=3.46296836 podStartE2EDuration="5.800033438s" podCreationTimestamp="2025-08-13 07:19:46 +0000 UTC" firstStartedPulling="2025-08-13 07:19:47.246191731 +0000 UTC m=+21.774711473" lastFinishedPulling="2025-08-13 07:19:49.583256773 +0000 UTC m=+24.111776551" observedRunningTime="2025-08-13 07:19:49.793454828 +0000 UTC m=+24.321974584" watchObservedRunningTime="2025-08-13 07:19:51.800033438 +0000 UTC m=+26.328553211" Aug 13 07:19:52.631804 kubelet[3177]: E0813 07:19:52.631760 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce" Aug 13 07:19:54.631683 kubelet[3177]: E0813 07:19:54.631615 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce" Aug 13 07:19:54.761223 containerd[1979]: time="2025-08-13T07:19:54.761167978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:54.762344 containerd[1979]: time="2025-08-13T07:19:54.762174935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:19:54.764178 containerd[1979]: time="2025-08-13T07:19:54.763272697Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:54.765556 containerd[1979]: time="2025-08-13T07:19:54.765520777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:19:54.766410 containerd[1979]: time="2025-08-13T07:19:54.766375101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.982731601s" Aug 13 07:19:54.766575 containerd[1979]: time="2025-08-13T07:19:54.766549107Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:19:54.769783 containerd[1979]: time="2025-08-13T07:19:54.769737319Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:19:54.799450 containerd[1979]: time="2025-08-13T07:19:54.799209794Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0\"" Aug 13 07:19:54.799797 containerd[1979]: time="2025-08-13T07:19:54.799761852Z" level=info msg="StartContainer for \"a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0\"" Aug 13 07:19:54.839293 systemd[1]: run-containerd-runc-k8s.io-a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0-runc.gpgNxR.mount: Deactivated successfully. Aug 13 07:19:54.844879 systemd[1]: Started cri-containerd-a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0.scope - libcontainer container a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0. Aug 13 07:19:54.884026 containerd[1979]: time="2025-08-13T07:19:54.883892733Z" level=info msg="StartContainer for \"a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0\" returns successfully" Aug 13 07:19:55.738729 systemd[1]: cri-containerd-a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0.scope: Deactivated successfully. Aug 13 07:19:55.776909 kubelet[3177]: I0813 07:19:55.776855 3177 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:19:55.781166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0-rootfs.mount: Deactivated successfully. Aug 13 07:19:55.887055 systemd[1]: Created slice kubepods-burstable-podbd32d8f1_1e08_483e_bee4_9fb610250047.slice - libcontainer container kubepods-burstable-podbd32d8f1_1e08_483e_bee4_9fb610250047.slice. Aug 13 07:19:55.895515 systemd[1]: Created slice kubepods-burstable-pod78b6f36b_2fe6_4969_b860_8d05648c6f1d.slice - libcontainer container kubepods-burstable-pod78b6f36b_2fe6_4969_b860_8d05648c6f1d.slice. Aug 13 07:19:55.903333 systemd[1]: Created slice kubepods-besteffort-pod8fda3464_db28_46a1_ac0f_62ebd7a09936.slice - libcontainer container kubepods-besteffort-pod8fda3464_db28_46a1_ac0f_62ebd7a09936.slice. Aug 13 07:19:55.910647 systemd[1]: Created slice kubepods-besteffort-poddc25405d_dcbb_4f86_bd65_6c51296b34d0.slice - libcontainer container kubepods-besteffort-poddc25405d_dcbb_4f86_bd65_6c51296b34d0.slice. Aug 13 07:19:55.918725 systemd[1]: Created slice kubepods-besteffort-pod55eb4daa_f610_43f7_8938_3afe0a026eea.slice - libcontainer container kubepods-besteffort-pod55eb4daa_f610_43f7_8938_3afe0a026eea.slice. 
Aug 13 07:19:55.933250 kubelet[3177]: W0813 07:19:55.933053 3177 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-17-50" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-50' and this object Aug 13 07:19:55.933250 kubelet[3177]: E0813 07:19:55.933118 3177 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-17-50\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-50' and this object" logger="UnhandledError" Aug 13 07:19:55.935697 kubelet[3177]: W0813 07:19:55.934396 3177 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-17-50" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-50' and this object Aug 13 07:19:55.935697 kubelet[3177]: E0813 07:19:55.934433 3177 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-17-50\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-50' and this object" logger="UnhandledError" Aug 13 07:19:55.935697 kubelet[3177]: W0813 07:19:55.934494 3177 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-17-50" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-17-50' and this object Aug 13 07:19:55.935697 kubelet[3177]: E0813 07:19:55.934512 3177 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-17-50\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-50' and this object" logger="UnhandledError" Aug 13 07:19:55.948355 systemd[1]: Created slice kubepods-besteffort-podeec34e8b_75a2_4179_ad14_ba5b2ef8362a.slice - libcontainer container kubepods-besteffort-podeec34e8b_75a2_4179_ad14_ba5b2ef8362a.slice. Aug 13 07:19:55.960195 systemd[1]: Created slice kubepods-besteffort-podc765fc0f_d20b_48b8_a94e_e269e9f26813.slice - libcontainer container kubepods-besteffort-podc765fc0f_d20b_48b8_a94e_e269e9f26813.slice. 
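The reflector warnings above are the API server's Node authorizer at work: a kubelet identity (system:node:ip-172-31-17-50) may only read a ConfigMap or Secret once a pod bound to that node actually references it, and these list/watch attempts raced the binding of the freshly created whisker and goldmane pods. A client-go sketch of the kind of read being rejected (standard k8s.io/client-go packages; assumes in-cluster credentials, and uses a single GET where the kubelet reflector really does LIST+WATCH):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Until the scheduler binds a pod that mounts this ConfigMap to the
	// node, a node identity gets: configmaps "goldmane-ca-bundle" is
	// forbidden: ... no relationship found between node '...' and this object
	cm, err := client.CoreV1().ConfigMaps("calico-system").
		Get(context.TODO(), "goldmane-ca-bundle", metav1.GetOptions{})
	fmt.Println(cm, err)
}
```

The denial is transient by design; it clears on its own once the pod-to-node relationship is registered, which is why the same volumes mount successfully a moment later.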
Aug 13 07:19:56.003210 kubelet[3177]: I0813 07:19:56.003021 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq4d\" (UniqueName: \"kubernetes.io/projected/c765fc0f-d20b-48b8-a94e-e269e9f26813-kube-api-access-sbq4d\") pod \"calico-apiserver-745bf746c6-vph99\" (UID: \"c765fc0f-d20b-48b8-a94e-e269e9f26813\") " pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" Aug 13 07:19:56.009974 kubelet[3177]: I0813 07:19:56.003631 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8kl\" (UniqueName: \"kubernetes.io/projected/8fda3464-db28-46a1-ac0f-62ebd7a09936-kube-api-access-qd8kl\") pod \"calico-kube-controllers-5696577845-qp9gz\" (UID: \"8fda3464-db28-46a1-ac0f-62ebd7a09936\") " pod="calico-system/calico-kube-controllers-5696577845-qp9gz" Aug 13 07:19:56.009974 kubelet[3177]: I0813 07:19:56.003751 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair\") pod \"whisker-59f745bdfb-2h8n7\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " pod="calico-system/whisker-59f745bdfb-2h8n7" Aug 13 07:19:56.009974 kubelet[3177]: I0813 07:19:56.003785 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78b6f36b-2fe6-4969-b860-8d05648c6f1d-config-volume\") pod \"coredns-7c65d6cfc9-f4dvw\" (UID: \"78b6f36b-2fe6-4969-b860-8d05648c6f1d\") " pod="kube-system/coredns-7c65d6cfc9-f4dvw" Aug 13 07:19:56.009974 kubelet[3177]: I0813 07:19:56.003829 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnnx\" (UniqueName: \"kubernetes.io/projected/55eb4daa-f610-43f7-8938-3afe0a026eea-kube-api-access-9pnnx\") pod \"calico-apiserver-745bf746c6-bcs2t\" (UID: \"55eb4daa-f610-43f7-8938-3afe0a026eea\") " pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" Aug 13 07:19:56.009974 kubelet[3177]: I0813 07:19:56.003878 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g47kx\" (UniqueName: \"kubernetes.io/projected/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-kube-api-access-g47kx\") pod \"whisker-59f745bdfb-2h8n7\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " pod="calico-system/whisker-59f745bdfb-2h8n7" Aug 13 07:19:56.010537 kubelet[3177]: I0813 07:19:56.003919 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmrz\" (UniqueName: \"kubernetes.io/projected/78b6f36b-2fe6-4969-b860-8d05648c6f1d-kube-api-access-dcmrz\") pod \"coredns-7c65d6cfc9-f4dvw\" (UID: \"78b6f36b-2fe6-4969-b860-8d05648c6f1d\") " pod="kube-system/coredns-7c65d6cfc9-f4dvw" Aug 13 07:19:56.010537 kubelet[3177]: I0813 07:19:56.003951 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnl44\" (UniqueName: \"kubernetes.io/projected/bd32d8f1-1e08-483e-bee4-9fb610250047-kube-api-access-xnl44\") pod \"coredns-7c65d6cfc9-k7r7v\" (UID: \"bd32d8f1-1e08-483e-bee4-9fb610250047\") " pod="kube-system/coredns-7c65d6cfc9-k7r7v" Aug 13 07:19:56.010537 kubelet[3177]: I0813 07:19:56.003990 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle\") pod \"whisker-59f745bdfb-2h8n7\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " pod="calico-system/whisker-59f745bdfb-2h8n7" Aug 13 07:19:56.010537 kubelet[3177]: I0813 07:19:56.004015 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dc25405d-dcbb-4f86-bd65-6c51296b34d0-goldmane-key-pair\") pod \"goldmane-58fd7646b9-wq9m9\" (UID: \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\") " pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:56.010537 kubelet[3177]: I0813 07:19:56.004044 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/55eb4daa-f610-43f7-8938-3afe0a026eea-calico-apiserver-certs\") pod \"calico-apiserver-745bf746c6-bcs2t\" (UID: \"55eb4daa-f610-43f7-8938-3afe0a026eea\") " pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" Aug 13 07:19:56.010781 kubelet[3177]: I0813 07:19:56.004075 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd32d8f1-1e08-483e-bee4-9fb610250047-config-volume\") pod \"coredns-7c65d6cfc9-k7r7v\" (UID: \"bd32d8f1-1e08-483e-bee4-9fb610250047\") " pod="kube-system/coredns-7c65d6cfc9-k7r7v" Aug 13 07:19:56.010781 kubelet[3177]: I0813 07:19:56.004100 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc25405d-dcbb-4f86-bd65-6c51296b34d0-config\") pod \"goldmane-58fd7646b9-wq9m9\" (UID: \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\") " pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:56.010781 kubelet[3177]: I0813 07:19:56.004125 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc25405d-dcbb-4f86-bd65-6c51296b34d0-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-wq9m9\" (UID: \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\") " pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:56.010781 kubelet[3177]: I0813 07:19:56.004169 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c765fc0f-d20b-48b8-a94e-e269e9f26813-calico-apiserver-certs\") pod \"calico-apiserver-745bf746c6-vph99\" (UID: \"c765fc0f-d20b-48b8-a94e-e269e9f26813\") " pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" Aug 13 07:19:56.010781 kubelet[3177]: I0813 07:19:56.004200 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda3464-db28-46a1-ac0f-62ebd7a09936-tigera-ca-bundle\") pod \"calico-kube-controllers-5696577845-qp9gz\" (UID: \"8fda3464-db28-46a1-ac0f-62ebd7a09936\") " pod="calico-system/calico-kube-controllers-5696577845-qp9gz" Aug 13 07:19:56.011065 kubelet[3177]: I0813 07:19:56.004223 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk52v\" (UniqueName: \"kubernetes.io/projected/dc25405d-dcbb-4f86-bd65-6c51296b34d0-kube-api-access-wk52v\") pod \"goldmane-58fd7646b9-wq9m9\" (UID: \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\") " 
pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:56.029627 containerd[1979]: time="2025-08-13T07:19:56.029396337Z" level=info msg="shim disconnected" id=a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0 namespace=k8s.io Aug 13 07:19:56.029627 containerd[1979]: time="2025-08-13T07:19:56.029451759Z" level=warning msg="cleaning up after shim disconnected" id=a8d5e22ec73878b46af0ef8f777a9151e4f65bbf97276e0a471b986aefa6e6e0 namespace=k8s.io Aug 13 07:19:56.029627 containerd[1979]: time="2025-08-13T07:19:56.029460181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:19:56.199219 containerd[1979]: time="2025-08-13T07:19:56.199166282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7r7v,Uid:bd32d8f1-1e08-483e-bee4-9fb610250047,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:56.200567 containerd[1979]: time="2025-08-13T07:19:56.200529996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4dvw,Uid:78b6f36b-2fe6-4969-b860-8d05648c6f1d,Namespace:kube-system,Attempt:0,}" Aug 13 07:19:56.210020 containerd[1979]: time="2025-08-13T07:19:56.209965711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5696577845-qp9gz,Uid:8fda3464-db28-46a1-ac0f-62ebd7a09936,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:56.233181 containerd[1979]: time="2025-08-13T07:19:56.233139893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-bcs2t,Uid:55eb4daa-f610-43f7-8938-3afe0a026eea,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:19:56.267636 containerd[1979]: time="2025-08-13T07:19:56.267534762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-vph99,Uid:c765fc0f-d20b-48b8-a94e-e269e9f26813,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:19:56.647716 systemd[1]: Created slice kubepods-besteffort-pod4568b68e_bbde_4103_a6ee_c309f4a8bbce.slice - libcontainer container kubepods-besteffort-pod4568b68e_bbde_4103_a6ee_c309f4a8bbce.slice. 
Aug 13 07:19:56.652264 containerd[1979]: time="2025-08-13T07:19:56.652114521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xh28p,Uid:4568b68e-bbde-4103-a6ee-c309f4a8bbce,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:56.685096 containerd[1979]: time="2025-08-13T07:19:56.684888721Z" level=error msg="Failed to destroy network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.686665 containerd[1979]: time="2025-08-13T07:19:56.686532833Z" level=error msg="Failed to destroy network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.695943 containerd[1979]: time="2025-08-13T07:19:56.695886246Z" level=error msg="encountered an error cleaning up failed sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.696164 containerd[1979]: time="2025-08-13T07:19:56.696138267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5696577845-qp9gz,Uid:8fda3464-db28-46a1-ac0f-62ebd7a09936,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.696428 containerd[1979]: time="2025-08-13T07:19:56.696394423Z" level=error msg="encountered an error cleaning up failed sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.696553 containerd[1979]: time="2025-08-13T07:19:56.696525850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-vph99,Uid:c765fc0f-d20b-48b8-a94e-e269e9f26813,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.703421 containerd[1979]: time="2025-08-13T07:19:56.703370114Z" level=error msg="Failed to destroy network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.703959 containerd[1979]: time="2025-08-13T07:19:56.703922637Z" level=error msg="encountered an error cleaning up failed 
sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.704127 containerd[1979]: time="2025-08-13T07:19:56.704099010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4dvw,Uid:78b6f36b-2fe6-4969-b860-8d05648c6f1d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.704726 containerd[1979]: time="2025-08-13T07:19:56.704311037Z" level=error msg="Failed to destroy network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.704726 containerd[1979]: time="2025-08-13T07:19:56.704593467Z" level=error msg="encountered an error cleaning up failed sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.704898 containerd[1979]: time="2025-08-13T07:19:56.704869789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-bcs2t,Uid:55eb4daa-f610-43f7-8938-3afe0a026eea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.706729 kubelet[3177]: E0813 07:19:56.705571 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.706729 kubelet[3177]: E0813 07:19:56.705661 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" Aug 13 07:19:56.706729 kubelet[3177]: E0813 07:19:56.705749 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.706729 kubelet[3177]: E0813 07:19:56.705808 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5696577845-qp9gz" Aug 13 07:19:56.707087 kubelet[3177]: E0813 07:19:56.705835 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5696577845-qp9gz" Aug 13 07:19:56.707087 kubelet[3177]: E0813 07:19:56.705899 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5696577845-qp9gz_calico-system(8fda3464-db28-46a1-ac0f-62ebd7a09936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5696577845-qp9gz_calico-system(8fda3464-db28-46a1-ac0f-62ebd7a09936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5696577845-qp9gz" podUID="8fda3464-db28-46a1-ac0f-62ebd7a09936" Aug 13 07:19:56.707087 kubelet[3177]: E0813 07:19:56.706756 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" Aug 13 07:19:56.708795 kubelet[3177]: E0813 07:19:56.708437 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.708795 kubelet[3177]: E0813 07:19:56.708481 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" Aug 13 07:19:56.708795 kubelet[3177]: E0813 07:19:56.708508 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" Aug 13 07:19:56.710431 kubelet[3177]: E0813 07:19:56.708556 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-745bf746c6-vph99_calico-apiserver(c765fc0f-d20b-48b8-a94e-e269e9f26813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-745bf746c6-vph99_calico-apiserver(c765fc0f-d20b-48b8-a94e-e269e9f26813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" podUID="c765fc0f-d20b-48b8-a94e-e269e9f26813" Aug 13 07:19:56.710431 kubelet[3177]: E0813 07:19:56.708606 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.710431 kubelet[3177]: E0813 07:19:56.708628 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f4dvw" Aug 13 07:19:56.710613 kubelet[3177]: E0813 07:19:56.708650 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f4dvw" Aug 13 07:19:56.710613 kubelet[3177]: E0813 07:19:56.708717 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-745bf746c6-bcs2t_calico-apiserver(55eb4daa-f610-43f7-8938-3afe0a026eea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-745bf746c6-bcs2t_calico-apiserver(55eb4daa-f610-43f7-8938-3afe0a026eea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" podUID="55eb4daa-f610-43f7-8938-3afe0a026eea" Aug 13 07:19:56.710613 kubelet[3177]: E0813 07:19:56.708799 3177 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f4dvw_kube-system(78b6f36b-2fe6-4969-b860-8d05648c6f1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f4dvw_kube-system(78b6f36b-2fe6-4969-b860-8d05648c6f1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f4dvw" podUID="78b6f36b-2fe6-4969-b860-8d05648c6f1d" Aug 13 07:19:56.719353 containerd[1979]: time="2025-08-13T07:19:56.717883214Z" level=error msg="Failed to destroy network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.719353 containerd[1979]: time="2025-08-13T07:19:56.718366149Z" level=error msg="encountered an error cleaning up failed sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.719353 containerd[1979]: time="2025-08-13T07:19:56.718446177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7r7v,Uid:bd32d8f1-1e08-483e-bee4-9fb610250047,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.719599 kubelet[3177]: E0813 07:19:56.718682 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.719599 kubelet[3177]: E0813 07:19:56.718739 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-k7r7v" Aug 13 07:19:56.719599 kubelet[3177]: E0813 07:19:56.718764 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-k7r7v" Aug 13 07:19:56.719769 kubelet[3177]: E0813 
07:19:56.718813 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-k7r7v_kube-system(bd32d8f1-1e08-483e-bee4-9fb610250047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-k7r7v_kube-system(bd32d8f1-1e08-483e-bee4-9fb610250047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-k7r7v" podUID="bd32d8f1-1e08-483e-bee4-9fb610250047" Aug 13 07:19:56.818327 containerd[1979]: time="2025-08-13T07:19:56.818283315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:19:56.819178 kubelet[3177]: I0813 07:19:56.819048 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:19:56.827445 containerd[1979]: time="2025-08-13T07:19:56.826063002Z" level=error msg="Failed to destroy network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.835689 containerd[1979]: time="2025-08-13T07:19:56.829480084Z" level=error msg="encountered an error cleaning up failed sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.835689 containerd[1979]: time="2025-08-13T07:19:56.833235203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xh28p,Uid:4568b68e-bbde-4103-a6ee-c309f4a8bbce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.835903 kubelet[3177]: E0813 07:19:56.833974 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.835903 kubelet[3177]: E0813 07:19:56.834082 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:56.835903 kubelet[3177]: E0813 07:19:56.834110 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xh28p" Aug 13 07:19:56.836048 kubelet[3177]: E0813 07:19:56.834162 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xh28p_calico-system(4568b68e-bbde-4103-a6ee-c309f4a8bbce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xh28p_calico-system(4568b68e-bbde-4103-a6ee-c309f4a8bbce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce" Aug 13 07:19:56.836858 kubelet[3177]: I0813 07:19:56.836826 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:19:56.838542 containerd[1979]: time="2025-08-13T07:19:56.838390809Z" level=info msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" Aug 13 07:19:56.841082 containerd[1979]: time="2025-08-13T07:19:56.840751591Z" level=info msg="Ensure that sandbox e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc in task-service has been cleanup successfully" Aug 13 07:19:56.848217 kubelet[3177]: I0813 07:19:56.848180 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:19:56.849034 containerd[1979]: time="2025-08-13T07:19:56.848992902Z" level=info msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" Aug 13 07:19:56.849221 containerd[1979]: time="2025-08-13T07:19:56.849195758Z" level=info msg="Ensure that sandbox b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde in task-service has been cleanup successfully" Aug 13 07:19:56.851254 containerd[1979]: time="2025-08-13T07:19:56.851214629Z" level=info msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" Aug 13 07:19:56.851426 containerd[1979]: time="2025-08-13T07:19:56.851406682Z" level=info msg="Ensure that sandbox 905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff in task-service has been cleanup successfully" Aug 13 07:19:56.864886 kubelet[3177]: I0813 07:19:56.864856 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:19:56.866967 containerd[1979]: time="2025-08-13T07:19:56.866925345Z" level=info msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" Aug 13 07:19:56.868973 kubelet[3177]: I0813 07:19:56.868949 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:19:56.869605 containerd[1979]: time="2025-08-13T07:19:56.869486259Z" level=info msg="Ensure that sandbox 
b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14 in task-service has been cleanup successfully" Aug 13 07:19:56.871641 containerd[1979]: time="2025-08-13T07:19:56.871608203Z" level=info msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" Aug 13 07:19:56.871888 containerd[1979]: time="2025-08-13T07:19:56.871862617Z" level=info msg="Ensure that sandbox d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4 in task-service has been cleanup successfully" Aug 13 07:19:56.966753 containerd[1979]: time="2025-08-13T07:19:56.966598632Z" level=error msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" failed" error="failed to destroy network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.967163 kubelet[3177]: E0813 07:19:56.967121 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:19:56.982562 containerd[1979]: time="2025-08-13T07:19:56.982509226Z" level=error msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" failed" error="failed to destroy network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.983782 kubelet[3177]: E0813 07:19:56.983344 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:19:56.983782 kubelet[3177]: E0813 07:19:56.967186 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff"} Aug 13 07:19:56.983782 kubelet[3177]: E0813 07:19:56.983397 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde"} Aug 13 07:19:56.983782 kubelet[3177]: E0813 07:19:56.983564 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd32d8f1-1e08-483e-bee4-9fb610250047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Aug 13 07:19:56.983782 kubelet[3177]: E0813 07:19:56.983590 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55eb4daa-f610-43f7-8938-3afe0a026eea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:56.984144 kubelet[3177]: E0813 07:19:56.983621 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55eb4daa-f610-43f7-8938-3afe0a026eea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" podUID="55eb4daa-f610-43f7-8938-3afe0a026eea" Aug 13 07:19:56.984144 kubelet[3177]: E0813 07:19:56.983738 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd32d8f1-1e08-483e-bee4-9fb610250047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-k7r7v" podUID="bd32d8f1-1e08-483e-bee4-9fb610250047" Aug 13 07:19:56.998400 containerd[1979]: time="2025-08-13T07:19:56.998340903Z" level=error msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" failed" error="failed to destroy network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.999044 containerd[1979]: time="2025-08-13T07:19:56.998623365Z" level=error msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" failed" error="failed to destroy network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.999044 containerd[1979]: time="2025-08-13T07:19:56.998932439Z" level=error msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" failed" error="failed to destroy network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:56.999153 kubelet[3177]: E0813 07:19:56.998574 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:19:56.999153 kubelet[3177]: E0813 07:19:56.998655 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc"} Aug 13 07:19:56.999153 kubelet[3177]: E0813 07:19:56.998763 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c765fc0f-d20b-48b8-a94e-e269e9f26813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:56.999153 kubelet[3177]: E0813 07:19:56.998803 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c765fc0f-d20b-48b8-a94e-e269e9f26813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" podUID="c765fc0f-d20b-48b8-a94e-e269e9f26813" Aug 13 07:19:56.999616 kubelet[3177]: E0813 07:19:56.999025 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:19:56.999616 kubelet[3177]: E0813 07:19:56.999111 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14"} Aug 13 07:19:56.999616 kubelet[3177]: E0813 07:19:56.999241 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fda3464-db28-46a1-ac0f-62ebd7a09936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:56.999616 kubelet[3177]: E0813 07:19:56.999277 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fda3464-db28-46a1-ac0f-62ebd7a09936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5696577845-qp9gz" podUID="8fda3464-db28-46a1-ac0f-62ebd7a09936" Aug 13 07:19:56.999876 kubelet[3177]: E0813 07:19:56.999252 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:19:56.999876 kubelet[3177]: E0813 07:19:56.999311 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4"} Aug 13 07:19:56.999876 kubelet[3177]: E0813 07:19:56.999341 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78b6f36b-2fe6-4969-b860-8d05648c6f1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:56.999876 kubelet[3177]: E0813 07:19:56.999380 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78b6f36b-2fe6-4969-b860-8d05648c6f1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f4dvw" podUID="78b6f36b-2fe6-4969-b860-8d05648c6f1d" Aug 13 07:19:57.108783 kubelet[3177]: E0813 07:19:57.108717 3177 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:19:57.108908 kubelet[3177]: E0813 07:19:57.108822 3177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle podName:eec34e8b-75a2-4179-ad14-ba5b2ef8362a nodeName:}" failed. No retries permitted until 2025-08-13 07:19:57.608802198 +0000 UTC m=+32.137321934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle") pod "whisker-59f745bdfb-2h8n7" (UID: "eec34e8b-75a2-4179-ad14-ba5b2ef8362a") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:19:57.116665 kubelet[3177]: E0813 07:19:57.116066 3177 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:19:57.116665 kubelet[3177]: E0813 07:19:57.116132 3177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc25405d-dcbb-4f86-bd65-6c51296b34d0-goldmane-ca-bundle podName:dc25405d-dcbb-4f86-bd65-6c51296b34d0 nodeName:}" failed. 
No retries permitted until 2025-08-13 07:19:57.616115347 +0000 UTC m=+32.144635083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/dc25405d-dcbb-4f86-bd65-6c51296b34d0-goldmane-ca-bundle") pod "goldmane-58fd7646b9-wq9m9" (UID: "dc25405d-dcbb-4f86-bd65-6c51296b34d0") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:19:57.120736 kubelet[3177]: E0813 07:19:57.120665 3177 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Aug 13 07:19:57.120881 kubelet[3177]: E0813 07:19:57.120783 3177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair podName:eec34e8b-75a2-4179-ad14-ba5b2ef8362a nodeName:}" failed. No retries permitted until 2025-08-13 07:19:57.620764119 +0000 UTC m=+32.149283856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair") pod "whisker-59f745bdfb-2h8n7" (UID: "eec34e8b-75a2-4179-ad14-ba5b2ef8362a") : failed to sync secret cache: timed out waiting for the condition Aug 13 07:19:57.716642 containerd[1979]: time="2025-08-13T07:19:57.716314104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wq9m9,Uid:dc25405d-dcbb-4f86-bd65-6c51296b34d0,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:57.755906 containerd[1979]: time="2025-08-13T07:19:57.755690704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59f745bdfb-2h8n7,Uid:eec34e8b-75a2-4179-ad14-ba5b2ef8362a,Namespace:calico-system,Attempt:0,}" Aug 13 07:19:57.854605 containerd[1979]: time="2025-08-13T07:19:57.854546950Z" level=error msg="Failed to destroy network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.855182 containerd[1979]: time="2025-08-13T07:19:57.855040064Z" level=error msg="encountered an error cleaning up failed sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.855182 containerd[1979]: time="2025-08-13T07:19:57.855104311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wq9m9,Uid:dc25405d-dcbb-4f86-bd65-6c51296b34d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.855985 kubelet[3177]: E0813 07:19:57.855939 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.856738 kubelet[3177]: E0813 07:19:57.856493 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:57.856738 kubelet[3177]: E0813 07:19:57.856533 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wq9m9" Aug 13 07:19:57.856738 kubelet[3177]: E0813 07:19:57.856616 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-wq9m9_calico-system(dc25405d-dcbb-4f86-bd65-6c51296b34d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-wq9m9_calico-system(dc25405d-dcbb-4f86-bd65-6c51296b34d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wq9m9" podUID="dc25405d-dcbb-4f86-bd65-6c51296b34d0" Aug 13 07:19:57.874517 kubelet[3177]: I0813 07:19:57.873129 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:19:57.877700 containerd[1979]: time="2025-08-13T07:19:57.877426824Z" level=info msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" Aug 13 07:19:57.879111 containerd[1979]: time="2025-08-13T07:19:57.879060663Z" level=info msg="Ensure that sandbox e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3 in task-service has been cleanup successfully" Aug 13 07:19:57.885035 kubelet[3177]: I0813 07:19:57.883452 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:19:57.887223 containerd[1979]: time="2025-08-13T07:19:57.887185761Z" level=info msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" Aug 13 07:19:57.890934 containerd[1979]: time="2025-08-13T07:19:57.888911879Z" level=info msg="Ensure that sandbox 188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a in task-service has been cleanup successfully" Aug 13 07:19:57.907766 containerd[1979]: time="2025-08-13T07:19:57.907628850Z" level=error msg="Failed to destroy network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.910692 containerd[1979]: 
time="2025-08-13T07:19:57.908975938Z" level=error msg="encountered an error cleaning up failed sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.913481 containerd[1979]: time="2025-08-13T07:19:57.912872631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59f745bdfb-2h8n7,Uid:eec34e8b-75a2-4179-ad14-ba5b2ef8362a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.914907 kubelet[3177]: E0813 07:19:57.913111 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.914907 kubelet[3177]: E0813 07:19:57.913169 3177 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59f745bdfb-2h8n7" Aug 13 07:19:57.914907 kubelet[3177]: E0813 07:19:57.913197 3177 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59f745bdfb-2h8n7" Aug 13 07:19:57.915380 kubelet[3177]: E0813 07:19:57.913247 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59f745bdfb-2h8n7_calico-system(eec34e8b-75a2-4179-ad14-ba5b2ef8362a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59f745bdfb-2h8n7_calico-system(eec34e8b-75a2-4179-ad14-ba5b2ef8362a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59f745bdfb-2h8n7" podUID="eec34e8b-75a2-4179-ad14-ba5b2ef8362a" Aug 13 07:19:57.960818 containerd[1979]: time="2025-08-13T07:19:57.960658928Z" level=error msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" failed" error="failed to destroy network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.961019 kubelet[3177]: E0813 07:19:57.960968 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:19:57.961129 kubelet[3177]: E0813 07:19:57.961036 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3"} Aug 13 07:19:57.961129 kubelet[3177]: E0813 07:19:57.961080 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:57.961129 kubelet[3177]: E0813 07:19:57.961112 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4568b68e-bbde-4103-a6ee-c309f4a8bbce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xh28p" podUID="4568b68e-bbde-4103-a6ee-c309f4a8bbce" Aug 13 07:19:57.978336 containerd[1979]: time="2025-08-13T07:19:57.977704086Z" level=error msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" failed" error="failed to destroy network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:57.978463 kubelet[3177]: E0813 07:19:57.977978 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:19:57.978463 kubelet[3177]: E0813 07:19:57.978032 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a"} Aug 13 07:19:57.978463 kubelet[3177]: E0813 07:19:57.978078 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:57.978463 kubelet[3177]: E0813 07:19:57.978111 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc25405d-dcbb-4f86-bd65-6c51296b34d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wq9m9" podUID="dc25405d-dcbb-4f86-bd65-6c51296b34d0" Aug 13 07:19:58.116831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b-shm.mount: Deactivated successfully. Aug 13 07:19:58.118870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a-shm.mount: Deactivated successfully. Aug 13 07:19:58.886442 kubelet[3177]: I0813 07:19:58.886404 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:19:58.887965 containerd[1979]: time="2025-08-13T07:19:58.887256050Z" level=info msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" Aug 13 07:19:58.887965 containerd[1979]: time="2025-08-13T07:19:58.887487659Z" level=info msg="Ensure that sandbox 6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b in task-service has been cleanup successfully" Aug 13 07:19:58.947001 containerd[1979]: time="2025-08-13T07:19:58.946649454Z" level=error msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" failed" error="failed to destroy network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:19:58.947158 kubelet[3177]: E0813 07:19:58.947006 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:19:58.947158 kubelet[3177]: E0813 07:19:58.947057 3177 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b"} Aug 13 07:19:58.947158 kubelet[3177]: E0813 07:19:58.947101 3177 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:19:58.947381 kubelet[3177]: E0813 07:19:58.947132 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59f745bdfb-2h8n7" podUID="eec34e8b-75a2-4179-ad14-ba5b2ef8362a" Aug 13 07:20:03.997367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238444322.mount: Deactivated successfully. Aug 13 07:20:04.069389 containerd[1979]: time="2025-08-13T07:20:04.068391443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.250051271s" Aug 13 07:20:04.069389 containerd[1979]: time="2025-08-13T07:20:04.068437592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:20:04.075565 containerd[1979]: time="2025-08-13T07:20:04.074391981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:20:04.081638 containerd[1979]: time="2025-08-13T07:20:04.081571289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:04.101411 containerd[1979]: time="2025-08-13T07:20:04.101373133Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:04.102267 containerd[1979]: time="2025-08-13T07:20:04.102227642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:04.148870 containerd[1979]: time="2025-08-13T07:20:04.148824379Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:20:04.191249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597364775.mount: Deactivated successfully. 
Aug 13 07:20:04.207145 containerd[1979]: time="2025-08-13T07:20:04.207083230Z" level=info msg="CreateContainer within sandbox \"e6b1351b5ced4ce0705a9d913710a1ff76df090c98dc3064224e4d37061e5dce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137\"" Aug 13 07:20:04.212244 containerd[1979]: time="2025-08-13T07:20:04.212197403Z" level=info msg="StartContainer for \"1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137\"" Aug 13 07:20:04.311873 systemd[1]: Started cri-containerd-1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137.scope - libcontainer container 1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137. Aug 13 07:20:04.385521 containerd[1979]: time="2025-08-13T07:20:04.382535804Z" level=info msg="StartContainer for \"1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137\" returns successfully" Aug 13 07:20:04.773698 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:20:04.775070 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 07:20:05.298505 kubelet[3177]: I0813 07:20:05.285565 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pznrm" podStartSLOduration=1.878335315 podStartE2EDuration="18.256273963s" podCreationTimestamp="2025-08-13 07:19:47 +0000 UTC" firstStartedPulling="2025-08-13 07:19:47.697057501 +0000 UTC m=+22.225577246" lastFinishedPulling="2025-08-13 07:20:04.074996159 +0000 UTC m=+38.603515894" observedRunningTime="2025-08-13 07:20:05.009054375 +0000 UTC m=+39.537574130" watchObservedRunningTime="2025-08-13 07:20:05.256273963 +0000 UTC m=+39.784793717" Aug 13 07:20:05.307031 containerd[1979]: time="2025-08-13T07:20:05.306986709Z" level=info msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.465 [INFO][4600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.466 [INFO][4600] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" iface="eth0" netns="/var/run/netns/cni-fd6cd8da-4d0c-76b7-9c68-ade25741dc79" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.466 [INFO][4600] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" iface="eth0" netns="/var/run/netns/cni-fd6cd8da-4d0c-76b7-9c68-ade25741dc79" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.467 [INFO][4600] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" iface="eth0" netns="/var/run/netns/cni-fd6cd8da-4d0c-76b7-9c68-ade25741dc79" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.467 [INFO][4600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.467 [INFO][4600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.857 [INFO][4618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.861 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.863 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.878 [WARNING][4618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.878 [INFO][4618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.879 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:05.883709 containerd[1979]: 2025-08-13 07:20:05.881 [INFO][4600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:05.889360 systemd[1]: run-netns-cni\x2dfd6cd8da\x2d4d0c\x2d76b7\x2d9c68\x2dade25741dc79.mount: Deactivated successfully. Aug 13 07:20:05.891926 containerd[1979]: time="2025-08-13T07:20:05.891061411Z" level=info msg="TearDown network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" successfully" Aug 13 07:20:05.891926 containerd[1979]: time="2025-08-13T07:20:05.891096484Z" level=info msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" returns successfully" Aug 13 07:20:05.962073 systemd[1]: run-containerd-runc-k8s.io-1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137-runc.j4WTxD.mount: Deactivated successfully. 
Aug 13 07:20:06.007161 kubelet[3177]: I0813 07:20:06.007065 3177 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair\") pod \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " Aug 13 07:20:06.007161 kubelet[3177]: I0813 07:20:06.007124 3177 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g47kx\" (UniqueName: \"kubernetes.io/projected/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-kube-api-access-g47kx\") pod \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " Aug 13 07:20:06.007161 kubelet[3177]: I0813 07:20:06.007156 3177 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle\") pod \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\" (UID: \"eec34e8b-75a2-4179-ad14-ba5b2ef8362a\") " Aug 13 07:20:06.034076 kubelet[3177]: I0813 07:20:06.031193 3177 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-kube-api-access-g47kx" (OuterVolumeSpecName: "kube-api-access-g47kx") pod "eec34e8b-75a2-4179-ad14-ba5b2ef8362a" (UID: "eec34e8b-75a2-4179-ad14-ba5b2ef8362a"). InnerVolumeSpecName "kube-api-access-g47kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:20:06.034076 kubelet[3177]: I0813 07:20:06.033614 3177 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eec34e8b-75a2-4179-ad14-ba5b2ef8362a" (UID: "eec34e8b-75a2-4179-ad14-ba5b2ef8362a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:20:06.036177 systemd[1]: var-lib-kubelet-pods-eec34e8b\x2d75a2\x2d4179\x2dad14\x2dba5b2ef8362a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:20:06.036315 systemd[1]: var-lib-kubelet-pods-eec34e8b\x2d75a2\x2d4179\x2dad14\x2dba5b2ef8362a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg47kx.mount: Deactivated successfully. Aug 13 07:20:06.037346 kubelet[3177]: I0813 07:20:06.037305 3177 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eec34e8b-75a2-4179-ad14-ba5b2ef8362a" (UID: "eec34e8b-75a2-4179-ad14-ba5b2ef8362a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:20:06.108474 kubelet[3177]: I0813 07:20:06.108009 3177 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-backend-key-pair\") on node \"ip-172-31-17-50\" DevicePath \"\"" Aug 13 07:20:06.108474 kubelet[3177]: I0813 07:20:06.108043 3177 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g47kx\" (UniqueName: \"kubernetes.io/projected/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-kube-api-access-g47kx\") on node \"ip-172-31-17-50\" DevicePath \"\"" Aug 13 07:20:06.108474 kubelet[3177]: I0813 07:20:06.108061 3177 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec34e8b-75a2-4179-ad14-ba5b2ef8362a-whisker-ca-bundle\") on node \"ip-172-31-17-50\" DevicePath \"\"" Aug 13 07:20:06.233815 systemd[1]: Removed slice kubepods-besteffort-podeec34e8b_75a2_4179_ad14_ba5b2ef8362a.slice - libcontainer container kubepods-besteffort-podeec34e8b_75a2_4179_ad14_ba5b2ef8362a.slice. Aug 13 07:20:06.364208 systemd[1]: Created slice kubepods-besteffort-podfc638859_a2b0_44a4_8d37_b1e4ed482df2.slice - libcontainer container kubepods-besteffort-podfc638859_a2b0_44a4_8d37_b1e4ed482df2.slice. Aug 13 07:20:06.510334 kubelet[3177]: I0813 07:20:06.510226 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx27x\" (UniqueName: \"kubernetes.io/projected/fc638859-a2b0-44a4-8d37-b1e4ed482df2-kube-api-access-xx27x\") pod \"whisker-785bf44c44-l2d2t\" (UID: \"fc638859-a2b0-44a4-8d37-b1e4ed482df2\") " pod="calico-system/whisker-785bf44c44-l2d2t" Aug 13 07:20:06.510334 kubelet[3177]: I0813 07:20:06.510275 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc638859-a2b0-44a4-8d37-b1e4ed482df2-whisker-backend-key-pair\") pod \"whisker-785bf44c44-l2d2t\" (UID: \"fc638859-a2b0-44a4-8d37-b1e4ed482df2\") " pod="calico-system/whisker-785bf44c44-l2d2t" Aug 13 07:20:06.510334 kubelet[3177]: I0813 07:20:06.510295 3177 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc638859-a2b0-44a4-8d37-b1e4ed482df2-whisker-ca-bundle\") pod \"whisker-785bf44c44-l2d2t\" (UID: \"fc638859-a2b0-44a4-8d37-b1e4ed482df2\") " pod="calico-system/whisker-785bf44c44-l2d2t" Aug 13 07:20:06.670942 containerd[1979]: time="2025-08-13T07:20:06.668917861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785bf44c44-l2d2t,Uid:fc638859-a2b0-44a4-8d37-b1e4ed482df2,Namespace:calico-system,Attempt:0,}" Aug 13 07:20:06.932708 (udev-worker)[4764]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 07:20:06.940930 systemd-networkd[1896]: cali35f4fda3662: Link UP Aug 13 07:20:06.941260 systemd-networkd[1896]: cali35f4fda3662: Gained carrier Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.802 [INFO][4743] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.820 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0 whisker-785bf44c44- calico-system fc638859-a2b0-44a4-8d37-b1e4ed482df2 926 0 2025-08-13 07:20:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:785bf44c44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-50 whisker-785bf44c44-l2d2t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35f4fda3662 [] [] }} ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.820 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.850 [INFO][4757] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" HandleID="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Workload="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.852 [INFO][4757] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" HandleID="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Workload="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-50", "pod":"whisker-785bf44c44-l2d2t", "timestamp":"2025-08-13 07:20:06.850969665 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.852 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.852 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.852 [INFO][4757] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.864 [INFO][4757] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.885 [INFO][4757] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.891 [INFO][4757] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.893 [INFO][4757] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.896 [INFO][4757] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.896 [INFO][4757] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.899 [INFO][4757] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.904 [INFO][4757] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.912 [INFO][4757] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.1/26] block=192.168.78.0/26 handle="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.912 [INFO][4757] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.1/26] handle="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" host="ip-172-31-17-50" Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.912 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:20:06.960247 containerd[1979]: 2025-08-13 07:20:06.912 [INFO][4757] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.1/26] IPv6=[] ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" HandleID="k8s-pod-network.2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Workload="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.915 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0", GenerateName:"whisker-785bf44c44-", Namespace:"calico-system", SelfLink:"", UID:"fc638859-a2b0-44a4-8d37-b1e4ed482df2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"785bf44c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"whisker-785bf44c44-l2d2t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35f4fda3662", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.915 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.1/32] ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.915 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35f4fda3662 ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.931 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.933 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" 
WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0", GenerateName:"whisker-785bf44c44-", Namespace:"calico-system", SelfLink:"", UID:"fc638859-a2b0-44a4-8d37-b1e4ed482df2", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 20, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"785bf44c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc", Pod:"whisker-785bf44c44-l2d2t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.78.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35f4fda3662", MAC:"c6:c3:fd:e5:5e:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:06.962228 containerd[1979]: 2025-08-13 07:20:06.953 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc" Namespace="calico-system" Pod="whisker-785bf44c44-l2d2t" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--785bf44c44--l2d2t-eth0" Aug 13 07:20:07.011196 containerd[1979]: time="2025-08-13T07:20:07.010897172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:07.011196 containerd[1979]: time="2025-08-13T07:20:07.010992494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:07.011196 containerd[1979]: time="2025-08-13T07:20:07.011016093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:07.011762 containerd[1979]: time="2025-08-13T07:20:07.011327753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:07.039752 systemd[1]: Started cri-containerd-2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc.scope - libcontainer container 2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc. 
Aug 13 07:20:07.161725 containerd[1979]: time="2025-08-13T07:20:07.161629598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785bf44c44-l2d2t,Uid:fc638859-a2b0-44a4-8d37-b1e4ed482df2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc\"" Aug 13 07:20:07.164048 containerd[1979]: time="2025-08-13T07:20:07.164017268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:20:07.644783 kubelet[3177]: I0813 07:20:07.644724 3177 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec34e8b-75a2-4179-ad14-ba5b2ef8362a" path="/var/lib/kubelet/pods/eec34e8b-75a2-4179-ad14-ba5b2ef8362a/volumes" Aug 13 07:20:07.854946 kubelet[3177]: I0813 07:20:07.854864 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:20:07.978925 systemd-networkd[1896]: cali35f4fda3662: Gained IPv6LL Aug 13 07:20:09.066707 kernel: bpftool[4902]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:20:09.147497 containerd[1979]: time="2025-08-13T07:20:09.146290046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:09.149602 containerd[1979]: time="2025-08-13T07:20:09.149523923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:20:09.152700 containerd[1979]: time="2025-08-13T07:20:09.151960695Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:09.160038 containerd[1979]: time="2025-08-13T07:20:09.159587902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:09.160839 containerd[1979]: time="2025-08-13T07:20:09.160796979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.996581217s" Aug 13 07:20:09.162554 containerd[1979]: time="2025-08-13T07:20:09.162525020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:20:09.172577 containerd[1979]: time="2025-08-13T07:20:09.172525908Z" level=info msg="CreateContainer within sandbox \"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:20:09.197918 containerd[1979]: time="2025-08-13T07:20:09.197783357Z" level=info msg="CreateContainer within sandbox \"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"186b49ce1228455e8d296b344ab3a535c57a26637184e35202d386a515b4e7a3\"" Aug 13 07:20:09.201962 containerd[1979]: time="2025-08-13T07:20:09.199228010Z" level=info msg="StartContainer for \"186b49ce1228455e8d296b344ab3a535c57a26637184e35202d386a515b4e7a3\"" Aug 13 07:20:09.255923 systemd[1]: Started 
cri-containerd-186b49ce1228455e8d296b344ab3a535c57a26637184e35202d386a515b4e7a3.scope - libcontainer container 186b49ce1228455e8d296b344ab3a535c57a26637184e35202d386a515b4e7a3. Aug 13 07:20:09.323015 containerd[1979]: time="2025-08-13T07:20:09.322858570Z" level=info msg="StartContainer for \"186b49ce1228455e8d296b344ab3a535c57a26637184e35202d386a515b4e7a3\" returns successfully" Aug 13 07:20:09.326348 containerd[1979]: time="2025-08-13T07:20:09.326202236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:20:09.452202 systemd-networkd[1896]: vxlan.calico: Link UP Aug 13 07:20:09.452210 systemd-networkd[1896]: vxlan.calico: Gained carrier Aug 13 07:20:09.473122 (udev-worker)[4556]: Network interface NamePolicy= disabled on kernel command line. Aug 13 07:20:10.093252 systemd[1]: Started sshd@7-172.31.17.50:22-147.75.109.163:38668.service - OpenSSH per-connection server daemon (147.75.109.163:38668). Aug 13 07:20:10.281874 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 38668 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:10.284047 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:10.289815 systemd-logind[1957]: New session 8 of user core. Aug 13 07:20:10.296898 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:20:10.633298 containerd[1979]: time="2025-08-13T07:20:10.632266925Z" level=info msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" Aug 13 07:20:10.633298 containerd[1979]: time="2025-08-13T07:20:10.632657055Z" level=info msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" Aug 13 07:20:10.633894 containerd[1979]: time="2025-08-13T07:20:10.633821169Z" level=info msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.738 [INFO][5048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.741 [INFO][5048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" iface="eth0" netns="/var/run/netns/cni-4a958ccd-4fc3-03ef-295a-a0a967fe6a42" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.742 [INFO][5048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" iface="eth0" netns="/var/run/netns/cni-4a958ccd-4fc3-03ef-295a-a0a967fe6a42" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.742 [INFO][5048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" iface="eth0" netns="/var/run/netns/cni-4a958ccd-4fc3-03ef-295a-a0a967fe6a42" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.744 [INFO][5048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.744 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.824 [INFO][5068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.824 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.824 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.848 [WARNING][5068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.850 [INFO][5068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.858 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:10.872353 containerd[1979]: 2025-08-13 07:20:10.862 [INFO][5048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:10.872353 containerd[1979]: time="2025-08-13T07:20:10.872324103Z" level=info msg="TearDown network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" successfully" Aug 13 07:20:10.872353 containerd[1979]: time="2025-08-13T07:20:10.872356942Z" level=info msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" returns successfully" Aug 13 07:20:10.877191 containerd[1979]: time="2025-08-13T07:20:10.877059580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4dvw,Uid:78b6f36b-2fe6-4969-b860-8d05648c6f1d,Namespace:kube-system,Attempt:1,}" Aug 13 07:20:10.882195 systemd[1]: run-netns-cni\x2d4a958ccd\x2d4fc3\x2d03ef\x2d295a\x2da0a967fe6a42.mount: Deactivated successfully. Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.798 [INFO][5059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.801 [INFO][5059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" iface="eth0" netns="/var/run/netns/cni-d22e2f5c-7bd8-18ad-70e5-1596999a3f07" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.801 [INFO][5059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" iface="eth0" netns="/var/run/netns/cni-d22e2f5c-7bd8-18ad-70e5-1596999a3f07" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.803 [INFO][5059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" iface="eth0" netns="/var/run/netns/cni-d22e2f5c-7bd8-18ad-70e5-1596999a3f07" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.804 [INFO][5059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.804 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.926 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.928 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.928 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.941 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.941 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.944 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:10.963905 containerd[1979]: 2025-08-13 07:20:10.951 [INFO][5059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:10.978997 systemd[1]: run-netns-cni\x2dd22e2f5c\x2d7bd8\x2d18ad\x2d70e5\x2d1596999a3f07.mount: Deactivated successfully. 
Aug 13 07:20:11.012009 containerd[1979]: time="2025-08-13T07:20:10.974022336Z" level=info msg="TearDown network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" successfully" Aug 13 07:20:11.012009 containerd[1979]: time="2025-08-13T07:20:11.011973099Z" level=info msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" returns successfully" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.787 [INFO][5052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.787 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" iface="eth0" netns="/var/run/netns/cni-41243f8e-faa9-8366-e0d0-dc9201f92ec2" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.788 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" iface="eth0" netns="/var/run/netns/cni-41243f8e-faa9-8366-e0d0-dc9201f92ec2" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.806 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" iface="eth0" netns="/var/run/netns/cni-41243f8e-faa9-8366-e0d0-dc9201f92ec2" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.806 [INFO][5052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.806 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.940 [INFO][5079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.943 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.944 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.971 [WARNING][5079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.972 [INFO][5079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.983 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
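[editor's note] The teardown path logged above is deliberately idempotent: a CNI DEL may be replayed, so "Asked to release address but it doesn't exist" is only a WARNING, and release falls back from the handleID to the older workloadID form. A mock of that shape in Go (illustrative only; not Calico's actual ipam package):

package main

import "fmt"

// mock IPAM store: handleID -> allocated IPs
type ipam struct{ byHandle map[string][]string }

// releaseByHandle frees whatever the handle owns; a missing handle is
// not an error, mirroring the "doesn't exist. Ignoring" warning above.
func (s *ipam) releaseByHandle(handle string) []string {
	ips, ok := s.byHandle[handle]
	if !ok {
		fmt.Printf("WARNING: handle %q doesn't exist, ignoring\n", handle)
		return nil
	}
	delete(s.byHandle, handle)
	return ips
}

// release tries the handleID first, then the workloadID, so a DEL
// stays safe to retry after a partial teardown.
func (s *ipam) release(handleID, workloadID string) []string {
	if ips := s.releaseByHandle(handleID); ips != nil {
		return ips
	}
	return s.releaseByHandle(workloadID)
}

func main() {
	s := &ipam{byHandle: map[string][]string{}}
	// A repeated DEL for the same sandbox finds nothing and must not fail,
	// so "StopPodSandbox ... returns successfully" still gets logged.
	s.release("k8s-pod-network.d4edf3ef6441", "coredns-workload")
}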
Aug 13 07:20:11.012523 containerd[1979]: 2025-08-13 07:20:10.996 [INFO][5052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:11.016159 containerd[1979]: time="2025-08-13T07:20:11.012720324Z" level=info msg="TearDown network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" successfully" Aug 13 07:20:11.016159 containerd[1979]: time="2025-08-13T07:20:11.012744350Z" level=info msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" returns successfully" Aug 13 07:20:11.016159 containerd[1979]: time="2025-08-13T07:20:11.014539441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wq9m9,Uid:dc25405d-dcbb-4f86-bd65-6c51296b34d0,Namespace:calico-system,Attempt:1,}" Aug 13 07:20:11.021622 containerd[1979]: time="2025-08-13T07:20:11.017951500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xh28p,Uid:4568b68e-bbde-4103-a6ee-c309f4a8bbce,Namespace:calico-system,Attempt:1,}" Aug 13 07:20:11.021876 systemd[1]: run-netns-cni\x2d41243f8e\x2dfaa9\x2d8366\x2de0d0\x2ddc9201f92ec2.mount: Deactivated successfully. Aug 13 07:20:11.161505 sshd[5012]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:11.168142 systemd-logind[1957]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:20:11.169057 systemd[1]: sshd@7-172.31.17.50:22-147.75.109.163:38668.service: Deactivated successfully. Aug 13 07:20:11.173934 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:20:11.181743 systemd-logind[1957]: Removed session 8. Aug 13 07:20:11.352713 systemd-networkd[1896]: calif7db237a909: Link UP Aug 13 07:20:11.356768 systemd-networkd[1896]: calif7db237a909: Gained carrier Aug 13 07:20:11.357181 (udev-worker)[4968]: Network interface NamePolicy= disabled on kernel command line. 
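[editor's note] "Link UP" followed by "Gained carrier" is systemd-networkd observing the host-side veth (calif7db237a909) that the CNI plugin just created and set administratively up. A sketch of that final step using the vishvananda/netlink package — an assumption on my part, since the log does not show which library the plugin uses:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The plugin names the host side of the veth pair after the endpoint
	// (see "Setting the host side veth name to calif7db237a909" below).
	link, err := netlink.LinkByName("calif7db237a909")
	if err != nil {
		log.Fatalf("lookup: %v", err)
	}
	// Setting the link up is what lets networkd report "Link UP" and,
	// once the container-side peer is up too, "Gained carrier".
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatalf("set up: %v", err)
	}
}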
Aug 13 07:20:11.372345 systemd-networkd[1896]: vxlan.calico: Gained IPv6LL Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.128 [INFO][5090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0 coredns-7c65d6cfc9- kube-system 78b6f36b-2fe6-4969-b860-8d05648c6f1d 981 0 2025-08-13 07:19:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-50 coredns-7c65d6cfc9-f4dvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7db237a909 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.134 [INFO][5090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.250 [INFO][5119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" HandleID="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.251 [INFO][5119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" HandleID="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae180), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-50", "pod":"coredns-7c65d6cfc9-f4dvw", "timestamp":"2025-08-13 07:20:11.250932203 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.251 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.251 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.251 [INFO][5119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.265 [INFO][5119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.285 [INFO][5119] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.294 [INFO][5119] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.300 [INFO][5119] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.303 [INFO][5119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.303 [INFO][5119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.306 [INFO][5119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3 Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.315 [INFO][5119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.340 [INFO][5119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.2/26] block=192.168.78.0/26 handle="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.340 [INFO][5119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.2/26] handle="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" host="ip-172-31-17-50" Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.340 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
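[editor's note] The walk just logged is Calico's IPAM fast path in miniature: acquire the host-wide lock, confirm this node's affine block (192.168.78.0/26), load it, claim the lowest free address, and write a handle recording the claim. A compressed control-flow sketch with mock types (the real ipam package is considerably more involved; the assumption that ordinals 0 and 1 are already taken is mine, chosen so the next claim is .2 as logged):

package main

import (
	"fmt"
	"sync"
)

type block struct {
	cidr string
	used map[int]string // ordinal within block -> handleID
}

var (
	hostLock sync.Mutex // stands in for the host-wide IPAM lock in the log
	affine   = &block{cidr: "192.168.78.0/26",
		used: map[int]string{0: "taken-earlier", 1: "taken-earlier"}}
)

// autoAssign mirrors the logged sequence: acquire lock, use the affine
// block, claim the first free ordinal, record the handle.
func autoAssign(handleID string) string {
	hostLock.Lock()
	defer hostLock.Unlock() // "Released host-wide IPAM lock."
	for ord := 0; ord < 64; ord++ { // a /26 holds 64 ordinals
		if _, taken := affine.used[ord]; !taken {
			affine.used[ord] = handleID // "Writing block in order to claim IPs"
			return fmt.Sprintf("192.168.78.%d/26", ord)
		}
	}
	return "" // real code would try another block or return an error
}

func main() {
	// Matches the claim above: coredns-7c65d6cfc9-f4dvw gets 192.168.78.2/26.
	fmt.Println(autoAssign("k8s-pod-network.c101e70c7926"))
}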
Aug 13 07:20:11.414112 containerd[1979]: 2025-08-13 07:20:11.341 [INFO][5119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.2/26] IPv6=[] ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" HandleID="k8s-pod-network.c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.347 [INFO][5090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78b6f36b-2fe6-4969-b860-8d05648c6f1d", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"coredns-7c65d6cfc9-f4dvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7db237a909", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.348 [INFO][5090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.2/32] ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.348 [INFO][5090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7db237a909 ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.357 [INFO][5090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" 
WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.358 [INFO][5090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78b6f36b-2fe6-4969-b860-8d05648c6f1d", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3", Pod:"coredns-7c65d6cfc9-f4dvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7db237a909", MAC:"06:03:fb:eb:0d:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.415635 containerd[1979]: 2025-08-13 07:20:11.394 [INFO][5090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f4dvw" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:11.510905 containerd[1979]: time="2025-08-13T07:20:11.500270490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:11.510905 containerd[1979]: time="2025-08-13T07:20:11.500584325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:11.510905 containerd[1979]: time="2025-08-13T07:20:11.500719281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.510905 containerd[1979]: time="2025-08-13T07:20:11.501635737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.528213 systemd-networkd[1896]: calie37f4c9d927: Link UP Aug 13 07:20:11.528516 systemd-networkd[1896]: calie37f4c9d927: Gained carrier Aug 13 07:20:11.555945 systemd[1]: Started cri-containerd-c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3.scope - libcontainer container c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3. Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.261 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0 goldmane-58fd7646b9- calico-system dc25405d-dcbb-4f86-bd65-6c51296b34d0 983 0 2025-08-13 07:19:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-50 goldmane-58fd7646b9-wq9m9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie37f4c9d927 [] [] }} ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.261 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.427 [INFO][5141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" HandleID="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.427 [INFO][5141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" HandleID="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-50", "pod":"goldmane-58fd7646b9-wq9m9", "timestamp":"2025-08-13 07:20:11.427047699 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.427 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.427 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.427 [INFO][5141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.440 [INFO][5141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.460 [INFO][5141] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.474 [INFO][5141] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.479 [INFO][5141] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.484 [INFO][5141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.485 [INFO][5141] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.488 [INFO][5141] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8 Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.497 [INFO][5141] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5141] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.3/26] block=192.168.78.0/26 handle="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.3/26] handle="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" host="ip-172-31-17-50" Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:20:11.565328 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.3/26] IPv6=[] ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" HandleID="k8s-pod-network.c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.514 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"dc25405d-dcbb-4f86-bd65-6c51296b34d0", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"goldmane-58fd7646b9-wq9m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie37f4c9d927", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.515 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.3/32] ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.515 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie37f4c9d927 ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.531 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.532 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" 
WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"dc25405d-dcbb-4f86-bd65-6c51296b34d0", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8", Pod:"goldmane-58fd7646b9-wq9m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie37f4c9d927", MAC:"3e:1d:6b:c0:fe:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.566389 containerd[1979]: 2025-08-13 07:20:11.555 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8" Namespace="calico-system" Pod="goldmane-58fd7646b9-wq9m9" WorkloadEndpoint="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:11.637687 containerd[1979]: time="2025-08-13T07:20:11.634235298Z" level=info msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" Aug 13 07:20:11.653648 containerd[1979]: time="2025-08-13T07:20:11.653038179Z" level=info msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" Aug 13 07:20:11.698083 containerd[1979]: time="2025-08-13T07:20:11.694897659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:11.698083 containerd[1979]: time="2025-08-13T07:20:11.694980097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:11.698083 containerd[1979]: time="2025-08-13T07:20:11.695002936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.698083 containerd[1979]: time="2025-08-13T07:20:11.695109886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.730293 systemd-networkd[1896]: califeeae24646f: Link UP Aug 13 07:20:11.730652 systemd-networkd[1896]: califeeae24646f: Gained carrier Aug 13 07:20:11.770918 systemd[1]: Started cri-containerd-c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8.scope - libcontainer container c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8. 
Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.289 [INFO][5121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0 csi-node-driver- calico-system 4568b68e-bbde-4103-a6ee-c309f4a8bbce 982 0 2025-08-13 07:19:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-50 csi-node-driver-xh28p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califeeae24646f [] [] }} ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.289 [INFO][5121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.472 [INFO][5146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" HandleID="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.473 [INFO][5146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" HandleID="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000392250), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-50", "pod":"csi-node-driver-xh28p", "timestamp":"2025-08-13 07:20:11.472463762 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.474 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.510 [INFO][5146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.548 [INFO][5146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.573 [INFO][5146] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.583 [INFO][5146] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.589 [INFO][5146] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.595 [INFO][5146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.595 [INFO][5146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.606 [INFO][5146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76 Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.627 [INFO][5146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.669 [INFO][5146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.4/26] block=192.168.78.0/26 handle="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.670 [INFO][5146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.4/26] handle="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" host="ip-172-31-17-50" Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.670 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
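[editor's note] Across this section the node's affine block 192.168.78.0/26 hands out .2, .3, .4 and then .5 in strict arrival order: a /26 spans 2^(32-26) = 64 addresses (ordinals 0-63), and each claim takes the lowest free ordinal. A small Go illustration of the ordinal-to-address arithmetic using the standard net/netip package (illustration only):

package main

import (
	"fmt"
	"net/netip"
)

// nthAddr returns the address at the given ordinal inside a prefix,
// e.g. ordinal 4 in 192.168.78.0/26 is 192.168.78.4.
func nthAddr(p netip.Prefix, ordinal int) netip.Addr {
	a := p.Masked().Addr()
	for i := 0; i < ordinal; i++ {
		a = a.Next()
	}
	return a
}

func main() {
	block := netip.MustParsePrefix("192.168.78.0/26")
	size := 1 << (32 - block.Bits()) // 64 addresses in a /26
	fmt.Println("block size:", size)
	// coredns, goldmane, csi-node-driver, calico-kube-controllers
	// claimed ordinals 2..5 in this section, in arrival order.
	for _, ord := range []int{2, 3, 4, 5} {
		fmt.Println(ord, "->", nthAddr(block, ord))
	}
}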
Aug 13 07:20:11.827318 containerd[1979]: 2025-08-13 07:20:11.670 [INFO][5146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.4/26] IPv6=[] ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" HandleID="k8s-pod-network.036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.692 [INFO][5121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568b68e-bbde-4103-a6ee-c309f4a8bbce", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"csi-node-driver-xh28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeeae24646f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.696 [INFO][5121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.4/32] ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.696 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califeeae24646f ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.726 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.729 [INFO][5121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" 
Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568b68e-bbde-4103-a6ee-c309f4a8bbce", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76", Pod:"csi-node-driver-xh28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeeae24646f", MAC:"1e:c5:63:cc:5c:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:11.828820 containerd[1979]: 2025-08-13 07:20:11.813 [INFO][5121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76" Namespace="calico-system" Pod="csi-node-driver-xh28p" WorkloadEndpoint="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:11.896984 containerd[1979]: time="2025-08-13T07:20:11.896945966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f4dvw,Uid:78b6f36b-2fe6-4969-b860-8d05648c6f1d,Namespace:kube-system,Attempt:1,} returns sandbox id \"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3\"" Aug 13 07:20:11.910581 containerd[1979]: time="2025-08-13T07:20:11.910540351Z" level=info msg="CreateContainer within sandbox \"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:20:11.972531 containerd[1979]: time="2025-08-13T07:20:11.968629195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:11.982143 containerd[1979]: time="2025-08-13T07:20:11.976287040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:11.982143 containerd[1979]: time="2025-08-13T07:20:11.976791218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.982143 containerd[1979]: time="2025-08-13T07:20:11.977836068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:11.988021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617623207.mount: Deactivated successfully. Aug 13 07:20:12.003350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491108004.mount: Deactivated successfully. Aug 13 07:20:12.029964 containerd[1979]: time="2025-08-13T07:20:12.029921441Z" level=info msg="CreateContainer within sandbox \"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ee0726ea18c3a3bbd449361d3848a05a45b2120d227edeb2744d3e44a909e7c\"" Aug 13 07:20:12.035318 containerd[1979]: time="2025-08-13T07:20:12.035177922Z" level=info msg="StartContainer for \"4ee0726ea18c3a3bbd449361d3848a05a45b2120d227edeb2744d3e44a909e7c\"" Aug 13 07:20:12.084324 systemd[1]: Started cri-containerd-036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76.scope - libcontainer container 036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76. Aug 13 07:20:12.133930 containerd[1979]: time="2025-08-13T07:20:12.133885171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wq9m9,Uid:dc25405d-dcbb-4f86-bd65-6c51296b34d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8\"" Aug 13 07:20:12.197568 systemd[1]: Started cri-containerd-4ee0726ea18c3a3bbd449361d3848a05a45b2120d227edeb2744d3e44a909e7c.scope - libcontainer container 4ee0726ea18c3a3bbd449361d3848a05a45b2120d227edeb2744d3e44a909e7c. Aug 13 07:20:12.203865 containerd[1979]: time="2025-08-13T07:20:12.203822436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xh28p,Uid:4568b68e-bbde-4103-a6ee-c309f4a8bbce,Namespace:calico-system,Attempt:1,} returns sandbox id \"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76\"" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.063 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.063 [INFO][5242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" iface="eth0" netns="/var/run/netns/cni-4d0f4990-b367-2106-0440-fca299cfe047" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.067 [INFO][5242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" iface="eth0" netns="/var/run/netns/cni-4d0f4990-b367-2106-0440-fca299cfe047" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" iface="eth0" netns="/var/run/netns/cni-4d0f4990-b367-2106-0440-fca299cfe047" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.244 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.244 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.245 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.259 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.259 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.263 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:12.295882 containerd[1979]: 2025-08-13 07:20:12.283 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:12.295882 containerd[1979]: time="2025-08-13T07:20:12.295595378Z" level=info msg="TearDown network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" successfully" Aug 13 07:20:12.295882 containerd[1979]: time="2025-08-13T07:20:12.295632327Z" level=info msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" returns successfully" Aug 13 07:20:12.301612 containerd[1979]: time="2025-08-13T07:20:12.300317201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5696577845-qp9gz,Uid:8fda3464-db28-46a1-ac0f-62ebd7a09936,Namespace:calico-system,Attempt:1,}" Aug 13 07:20:12.302335 containerd[1979]: time="2025-08-13T07:20:12.302304736Z" level=info msg="StartContainer for \"4ee0726ea18c3a3bbd449361d3848a05a45b2120d227edeb2744d3e44a909e7c\" returns successfully" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.070 [INFO][5247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.070 [INFO][5247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" iface="eth0" netns="/var/run/netns/cni-b406b7fa-2a5e-179e-ec9c-38f2b444c117" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" iface="eth0" netns="/var/run/netns/cni-b406b7fa-2a5e-179e-ec9c-38f2b444c117" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" iface="eth0" netns="/var/run/netns/cni-b406b7fa-2a5e-179e-ec9c-38f2b444c117" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.071 [INFO][5247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.264 [INFO][5332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.265 [INFO][5332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.266 [INFO][5332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.286 [WARNING][5332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.286 [INFO][5332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.290 [INFO][5332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:12.311230 containerd[1979]: 2025-08-13 07:20:12.294 [INFO][5247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:12.312070 containerd[1979]: time="2025-08-13T07:20:12.312036575Z" level=info msg="TearDown network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" successfully" Aug 13 07:20:12.312269 containerd[1979]: time="2025-08-13T07:20:12.312161937Z" level=info msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" returns successfully" Aug 13 07:20:12.313402 containerd[1979]: time="2025-08-13T07:20:12.313365257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-bcs2t,Uid:55eb4daa-f610-43f7-8938-3afe0a026eea,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:20:12.636865 containerd[1979]: time="2025-08-13T07:20:12.636711942Z" level=info msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" Aug 13 07:20:12.639155 containerd[1979]: time="2025-08-13T07:20:12.639114423Z" level=info msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" Aug 13 07:20:12.745576 systemd-networkd[1896]: cali44bd3786738: Link UP Aug 13 07:20:12.747755 systemd-networkd[1896]: cali44bd3786738: Gained carrier Aug 13 07:20:12.778959 systemd-networkd[1896]: calif7db237a909: Gained IPv6LL Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.472 [INFO][5388] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0 calico-kube-controllers-5696577845- calico-system 8fda3464-db28-46a1-ac0f-62ebd7a09936 1002 0 2025-08-13 07:19:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5696577845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-50 calico-kube-controllers-5696577845-qp9gz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali44bd3786738 [] [] }} ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.473 [INFO][5388] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" 
Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.583 [INFO][5413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" HandleID="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.589 [INFO][5413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" HandleID="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-50", "pod":"calico-kube-controllers-5696577845-qp9gz", "timestamp":"2025-08-13 07:20:12.583826964 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.590 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.590 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.590 [INFO][5413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.609 [INFO][5413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.619 [INFO][5413] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.651 [INFO][5413] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.660 [INFO][5413] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.668 [INFO][5413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.668 [INFO][5413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.673 [INFO][5413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05 Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.685 [INFO][5413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" host="ip-172-31-17-50" Aug 13 07:20:12.814766 
containerd[1979]: 2025-08-13 07:20:12.705 [INFO][5413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.5/26] block=192.168.78.0/26 handle="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.705 [INFO][5413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.5/26] handle="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" host="ip-172-31-17-50" Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.705 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:12.814766 containerd[1979]: 2025-08-13 07:20:12.705 [INFO][5413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.5/26] IPv6=[] ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" HandleID="k8s-pod-network.bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.714 [INFO][5388] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0", GenerateName:"calico-kube-controllers-5696577845-", Namespace:"calico-system", SelfLink:"", UID:"8fda3464-db28-46a1-ac0f-62ebd7a09936", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5696577845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"calico-kube-controllers-5696577845-qp9gz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44bd3786738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.715 [INFO][5388] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.5/32] ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.716 [INFO][5388] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44bd3786738 
ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.753 [INFO][5388] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.755 [INFO][5388] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0", GenerateName:"calico-kube-controllers-5696577845-", Namespace:"calico-system", SelfLink:"", UID:"8fda3464-db28-46a1-ac0f-62ebd7a09936", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5696577845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05", Pod:"calico-kube-controllers-5696577845-qp9gz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44bd3786738", MAC:"b6:6e:64:cc:fd:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:12.816513 containerd[1979]: 2025-08-13 07:20:12.798 [INFO][5388] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05" Namespace="calico-system" Pod="calico-kube-controllers-5696577845-qp9gz" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:12.890808 systemd[1]: run-netns-cni\x2db406b7fa\x2d2a5e\x2d179e\x2dec9c\x2d38f2b444c117.mount: Deactivated successfully. Aug 13 07:20:12.891740 systemd[1]: run-netns-cni\x2d4d0f4990\x2db367\x2d2106\x2d0440\x2dfca299cfe047.mount: Deactivated successfully. 
Aug 13 07:20:12.973479 systemd-networkd[1896]: cali72f6de35128: Link UP Aug 13 07:20:12.973846 systemd-networkd[1896]: cali72f6de35128: Gained carrier Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.573 [INFO][5400] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0 calico-apiserver-745bf746c6- calico-apiserver 55eb4daa-f610-43f7-8938-3afe0a026eea 1003 0 2025-08-13 07:19:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745bf746c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-50 calico-apiserver-745bf746c6-bcs2t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72f6de35128 [] [] }} ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.573 [INFO][5400] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.735 [INFO][5420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" HandleID="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.737 [INFO][5420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" HandleID="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-50", "pod":"calico-apiserver-745bf746c6-bcs2t", "timestamp":"2025-08-13 07:20:12.735105957 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.739 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.740 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
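Both host-side interfaces so far (cali44bd3786738 above, cali72f6de35128 here) follow Calico's "cali" prefix plus an 11-character suffix, which fits exactly into the 15-byte Linux interface-name limit (IFNAMSIZ minus the terminating NUL). One plausible way to derive such a name deterministically is to hash the workload endpoint ID; this sketch illustrates the length constraint, and is not necessarily Calico's exact derivation:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName derives a stable host-side interface name from a workload
    // endpoint ID. Linux caps interface names at 15 bytes, so the 4-byte
    // "cali" prefix leaves room for an 11-character hash suffix.
    func vethName(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("calico-apiserver/calico-apiserver-745bf746c6-bcs2t"))
    }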
Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.740 [INFO][5420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.784 [INFO][5420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.810 [INFO][5420] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.833 [INFO][5420] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.840 [INFO][5420] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.848 [INFO][5420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.848 [INFO][5420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.855 [INFO][5420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85 Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.898 [INFO][5420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.944 [INFO][5420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.6/26] block=192.168.78.0/26 handle="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.944 [INFO][5420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.6/26] handle="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" host="ip-172-31-17-50" Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.944 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
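The IPAM walk that just assigned 192.168.78.6 repeats the same steps as the one that assigned .5: acquire the host-wide lock, confirm this host's affinity to the 192.168.78.0/26 block, load the block, take the next free address, and write the block back to claim it. A minimal in-memory sketch of next-free assignment from a /26 (the real allocator is datastore-backed and handles reservations, conflicts, and retries):

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block tracks allocations inside one affine CIDR block.
    type block struct {
        mu   sync.Mutex // stands in for the host-wide IPAM lock in the log
        cidr *net.IPNet
        used [64]bool // a /26 covers 64 addresses
    }

    // assign hands out the lowest free address in the block.
    func (b *block) assign() (net.IP, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        base := b.cidr.IP.To4()
        for i := range b.used {
            if !b.used[i] {
                b.used[i] = true
                return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.78.0/26")
        b := &block{cidr: cidr}
        for i := 0; i < 8; i++ { // .0 through .7, mirroring the sequential grants in the log
            ip, _ := b.assign()
            fmt.Println(ip)
        }
    }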
Aug 13 07:20:13.051753 containerd[1979]: 2025-08-13 07:20:12.944 [INFO][5420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.6/26] IPv6=[] ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" HandleID="k8s-pod-network.16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:12.960 [INFO][5400] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"55eb4daa-f610-43f7-8938-3afe0a026eea", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"calico-apiserver-745bf746c6-bcs2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f6de35128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:12.967 [INFO][5400] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.6/32] ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:12.967 [INFO][5400] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72f6de35128 ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:12.972 [INFO][5400] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:12.982 [INFO][5400] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"55eb4daa-f610-43f7-8938-3afe0a026eea", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85", Pod:"calico-apiserver-745bf746c6-bcs2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f6de35128", MAC:"be:be:ed:c4:75:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:13.055144 containerd[1979]: 2025-08-13 07:20:13.045 [INFO][5400] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-bcs2t" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:13.101591 containerd[1979]: time="2025-08-13T07:20:13.095855358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:13.101591 containerd[1979]: time="2025-08-13T07:20:13.095982022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:13.101591 containerd[1979]: time="2025-08-13T07:20:13.096040354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.101591 containerd[1979]: time="2025-08-13T07:20:13.096158166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.028 [INFO][5449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.028 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" iface="eth0" netns="/var/run/netns/cni-c84b420e-7b4a-b802-cabd-ee7dfed3dd4b" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.029 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" iface="eth0" netns="/var/run/netns/cni-c84b420e-7b4a-b802-cabd-ee7dfed3dd4b" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.034 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" iface="eth0" netns="/var/run/netns/cni-c84b420e-7b4a-b802-cabd-ee7dfed3dd4b" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.035 [INFO][5449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.035 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.153 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.154 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.154 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.181 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.182 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.188 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:13.240715 containerd[1979]: 2025-08-13 07:20:13.207 [INFO][5449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:13.244306 containerd[1979]: time="2025-08-13T07:20:13.242921679Z" level=info msg="TearDown network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" successfully" Aug 13 07:20:13.244306 containerd[1979]: time="2025-08-13T07:20:13.242984199Z" level=info msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" returns successfully" Aug 13 07:20:13.244656 systemd[1]: run-netns-cni\x2dc84b420e\x2d7b4a\x2db802\x2dcabd\x2dee7dfed3dd4b.mount: Deactivated successfully. 
Aug 13 07:20:13.250585 containerd[1979]: time="2025-08-13T07:20:13.249977775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-vph99,Uid:c765fc0f-d20b-48b8-a94e-e269e9f26813,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:20:13.255986 kubelet[3177]: I0813 07:20:13.254531 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f4dvw" podStartSLOduration=43.254501092 podStartE2EDuration="43.254501092s" podCreationTimestamp="2025-08-13 07:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:13.223304023 +0000 UTC m=+47.751823779" watchObservedRunningTime="2025-08-13 07:20:13.254501092 +0000 UTC m=+47.783020877" Aug 13 07:20:13.262461 containerd[1979]: time="2025-08-13T07:20:13.251477470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:13.262461 containerd[1979]: time="2025-08-13T07:20:13.251566412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:13.262461 containerd[1979]: time="2025-08-13T07:20:13.251587302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.262461 containerd[1979]: time="2025-08-13T07:20:13.252330721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.326067 systemd[1]: Started cri-containerd-16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85.scope - libcontainer container 16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85. Aug 13 07:20:13.338229 systemd[1]: Started cri-containerd-bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05.scope - libcontainer container bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05. Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.995 [INFO][5442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.995 [INFO][5442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" iface="eth0" netns="/var/run/netns/cni-e70220e7-0d11-c07a-b730-73f3100330fd" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.995 [INFO][5442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" iface="eth0" netns="/var/run/netns/cni-e70220e7-0d11-c07a-b730-73f3100330fd" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.995 [INFO][5442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" iface="eth0" netns="/var/run/netns/cni-e70220e7-0d11-c07a-b730-73f3100330fd" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.996 [INFO][5442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:12.997 [INFO][5442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.269 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.283 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.285 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.343 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.345 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.354 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:13.371929 containerd[1979]: 2025-08-13 07:20:13.368 [INFO][5442] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:13.392877 containerd[1979]: time="2025-08-13T07:20:13.392686674Z" level=info msg="TearDown network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" successfully" Aug 13 07:20:13.392877 containerd[1979]: time="2025-08-13T07:20:13.392736925Z" level=info msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" returns successfully" Aug 13 07:20:13.395953 containerd[1979]: time="2025-08-13T07:20:13.395903561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7r7v,Uid:bd32d8f1-1e08-483e-bee4-9fb610250047,Namespace:kube-system,Attempt:1,}" Aug 13 07:20:13.418945 systemd-networkd[1896]: calie37f4c9d927: Gained IPv6LL Aug 13 07:20:13.674966 systemd-networkd[1896]: califeeae24646f: Gained IPv6LL Aug 13 07:20:13.737870 systemd-networkd[1896]: calibed2aa8a47f: Link UP Aug 13 07:20:13.741997 systemd-networkd[1896]: calibed2aa8a47f: Gained carrier Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.495 [INFO][5551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0 calico-apiserver-745bf746c6- calico-apiserver c765fc0f-d20b-48b8-a94e-e269e9f26813 1020 0 2025-08-13 07:19:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745bf746c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-50 calico-apiserver-745bf746c6-vph99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibed2aa8a47f [] [] }} ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.495 [INFO][5551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.587 [INFO][5590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" HandleID="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.587 [INFO][5590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" HandleID="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-50", "pod":"calico-apiserver-745bf746c6-vph99", "timestamp":"2025-08-13 07:20:13.586241607 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.587 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.587 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.587 [INFO][5590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.607 [INFO][5590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.631 [INFO][5590] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.642 [INFO][5590] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.650 [INFO][5590] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.656 [INFO][5590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.657 [INFO][5590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.662 [INFO][5590] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.677 [INFO][5590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.696 [INFO][5590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.7/26] block=192.168.78.0/26 handle="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.698 [INFO][5590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.7/26] handle="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" host="ip-172-31-17-50" Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.699 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
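Each "Auto assigning IP" entry dumps the request that the CNI plugin hands to the allocator. Trimmed to the fields visible in these dumps, the shape is roughly the struct below; the field set is taken from the log itself, while the type name and anything else is an assumption for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    // autoAssignArgs mirrors the request logged at ipam_plugin.go 265,
    // reduced to the fields that appear in the dumps above.
    type autoAssignArgs struct {
        Num4, Num6       int               // how many IPv4/IPv6 addresses to assign
        HandleID         *string           // durable handle used to release the IP later
        Attrs            map[string]string // namespace, node, pod, timestamp
        Hostname         string
        IPv4Pools        []net.IPNet // empty means "any enabled pool"
        MaxBlocksPerHost int
        IntendedUse      string // "Workload"
    }

    func main() {
        h := "k8s-pod-network.77e7882a"
        args := autoAssignArgs{
            Num4:     1,
            HandleID: &h,
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "ip-172-31-17-50",
                "pod":       "calico-apiserver-745bf746c6-vph99",
            },
            Hostname:    "ip-172-31-17-50",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", args)
    }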
Aug 13 07:20:13.803497 containerd[1979]: 2025-08-13 07:20:13.699 [INFO][5590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.7/26] IPv6=[] ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" HandleID="k8s-pod-network.77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.715 [INFO][5551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c765fc0f-d20b-48b8-a94e-e269e9f26813", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"calico-apiserver-745bf746c6-vph99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed2aa8a47f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.716 [INFO][5551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.7/32] ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.716 [INFO][5551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibed2aa8a47f ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.751 [INFO][5551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.754 [INFO][5551] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c765fc0f-d20b-48b8-a94e-e269e9f26813", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b", Pod:"calico-apiserver-745bf746c6-vph99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed2aa8a47f", MAC:"fa:08:9f:07:dc:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:13.807046 containerd[1979]: 2025-08-13 07:20:13.777 [INFO][5551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b" Namespace="calico-apiserver" Pod="calico-apiserver-745bf746c6-vph99" WorkloadEndpoint="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:13.869205 systemd-networkd[1896]: cali44bd3786738: Gained IPv6LL Aug 13 07:20:13.887614 systemd[1]: run-containerd-runc-k8s.io-16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85-runc.IKYOcr.mount: Deactivated successfully. Aug 13 07:20:13.889600 systemd[1]: run-netns-cni\x2de70220e7\x2d0d11\x2dc07a\x2db730\x2d73f3100330fd.mount: Deactivated successfully. Aug 13 07:20:13.951688 containerd[1979]: time="2025-08-13T07:20:13.950772275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:13.951688 containerd[1979]: time="2025-08-13T07:20:13.950856842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:13.951688 containerd[1979]: time="2025-08-13T07:20:13.950880128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.951688 containerd[1979]: time="2025-08-13T07:20:13.951014637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:13.988845 containerd[1979]: time="2025-08-13T07:20:13.988799134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-bcs2t,Uid:55eb4daa-f610-43f7-8938-3afe0a026eea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85\"" Aug 13 07:20:13.992074 containerd[1979]: time="2025-08-13T07:20:13.992032299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5696577845-qp9gz,Uid:8fda3464-db28-46a1-ac0f-62ebd7a09936,Namespace:calico-system,Attempt:1,} returns sandbox id \"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05\"" Aug 13 07:20:14.036869 systemd[1]: Started cri-containerd-77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b.scope - libcontainer container 77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b. Aug 13 07:20:14.041454 systemd-networkd[1896]: cali42a3971f2b7: Link UP Aug 13 07:20:14.043914 systemd-networkd[1896]: cali42a3971f2b7: Gained carrier Aug 13 07:20:14.059398 systemd-networkd[1896]: cali72f6de35128: Gained IPv6LL Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.672 [INFO][5579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0 coredns-7c65d6cfc9- kube-system bd32d8f1-1e08-483e-bee4-9fb610250047 1018 0 2025-08-13 07:19:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-50 coredns-7c65d6cfc9-k7r7v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42a3971f2b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.672 [INFO][5579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.847 [INFO][5604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" HandleID="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.848 [INFO][5604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" HandleID="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103f60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-50", "pod":"coredns-7c65d6cfc9-k7r7v", "timestamp":"2025-08-13 07:20:13.847826014 +0000 UTC"}, Hostname:"ip-172-31-17-50", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.848 [INFO][5604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.848 [INFO][5604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.848 [INFO][5604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-50' Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.889 [INFO][5604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.911 [INFO][5604] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.941 [INFO][5604] ipam/ipam.go 511: Trying affinity for 192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.950 [INFO][5604] ipam/ipam.go 158: Attempting to load block cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.961 [INFO][5604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.78.0/26 host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.961 [INFO][5604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.78.0/26 handle="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.982 [INFO][5604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:13.998 [INFO][5604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.78.0/26 handle="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:14.028 [INFO][5604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.78.8/26] block=192.168.78.0/26 handle="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:14.028 [INFO][5604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.78.8/26] handle="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" host="ip-172-31-17-50" Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:14.028 [INFO][5604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
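Every endpoint written so far carries a locally-administered unicast MAC (b6:6e:64:cc:fd:74, be:be:ed:c4:75:06, fa:08:9f:07:dc:8b): in each, the first octet has the local bit 0x02 set and the multicast bit 0x01 clear. Randomly generating such a MAC is the common scheme for container interfaces; whether Calico uses exactly this is not shown in the log:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    func main() {
        mac := make([]byte, 6)
        if _, err := rand.Read(mac); err != nil {
            panic(err)
        }
        mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
        fmt.Printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
            mac[0], mac[1], mac[2], mac[3], mac[4], mac[5])
    }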
Aug 13 07:20:14.087795 containerd[1979]: 2025-08-13 07:20:14.028 [INFO][5604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.78.8/26] IPv6=[] ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" HandleID="k8s-pod-network.fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.034 [INFO][5579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bd32d8f1-1e08-483e-bee4-9fb610250047", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"", Pod:"coredns-7c65d6cfc9-k7r7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a3971f2b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.034 [INFO][5579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.78.8/32] ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.034 [INFO][5579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42a3971f2b7 ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.048 [INFO][5579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" 
WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.051 [INFO][5579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bd32d8f1-1e08-483e-bee4-9fb610250047", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe", Pod:"coredns-7c65d6cfc9-k7r7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a3971f2b7", MAC:"fa:ec:5c:14:a8:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:14.090755 containerd[1979]: 2025-08-13 07:20:14.081 [INFO][5579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-k7r7v" WorkloadEndpoint="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:14.163693 containerd[1979]: time="2025-08-13T07:20:14.163231810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:20:14.163693 containerd[1979]: time="2025-08-13T07:20:14.163314614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:20:14.163693 containerd[1979]: time="2025-08-13T07:20:14.163338623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:14.163693 containerd[1979]: time="2025-08-13T07:20:14.163462210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:20:14.217916 systemd[1]: Started cri-containerd-fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe.scope - libcontainer container fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe. Aug 13 07:20:14.251586 containerd[1979]: time="2025-08-13T07:20:14.251370046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745bf746c6-vph99,Uid:c765fc0f-d20b-48b8-a94e-e269e9f26813,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b\"" Aug 13 07:20:14.331791 containerd[1979]: time="2025-08-13T07:20:14.331733824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-k7r7v,Uid:bd32d8f1-1e08-483e-bee4-9fb610250047,Namespace:kube-system,Attempt:1,} returns sandbox id \"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe\"" Aug 13 07:20:14.341206 containerd[1979]: time="2025-08-13T07:20:14.341163081Z" level=info msg="CreateContainer within sandbox \"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:20:14.373149 containerd[1979]: time="2025-08-13T07:20:14.373091846Z" level=info msg="CreateContainer within sandbox \"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e05b9c3df7760784ea9148bb9e0162a1686c623c9f4daebcb0d19a063d6c717\"" Aug 13 07:20:14.374892 containerd[1979]: time="2025-08-13T07:20:14.374854309Z" level=info msg="StartContainer for \"6e05b9c3df7760784ea9148bb9e0162a1686c623c9f4daebcb0d19a063d6c717\"" Aug 13 07:20:14.460408 systemd[1]: Started cri-containerd-6e05b9c3df7760784ea9148bb9e0162a1686c623c9f4daebcb0d19a063d6c717.scope - libcontainer container 6e05b9c3df7760784ea9148bb9e0162a1686c623c9f4daebcb0d19a063d6c717. Aug 13 07:20:14.549703 containerd[1979]: time="2025-08-13T07:20:14.549223525Z" level=info msg="StartContainer for \"6e05b9c3df7760784ea9148bb9e0162a1686c623c9f4daebcb0d19a063d6c717\" returns successfully" Aug 13 07:20:14.892980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084538234.mount: Deactivated successfully. 
Aug 13 07:20:14.927026 containerd[1979]: time="2025-08-13T07:20:14.926957608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:14.930605 containerd[1979]: time="2025-08-13T07:20:14.930552402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:20:14.933293 containerd[1979]: time="2025-08-13T07:20:14.933231074Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:14.938819 containerd[1979]: time="2025-08-13T07:20:14.937927796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:14.938819 containerd[1979]: time="2025-08-13T07:20:14.938593506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.612351948s" Aug 13 07:20:14.938819 containerd[1979]: time="2025-08-13T07:20:14.938625661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:20:14.940128 containerd[1979]: time="2025-08-13T07:20:14.940071961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:20:14.945315 containerd[1979]: time="2025-08-13T07:20:14.945259698Z" level=info msg="CreateContainer within sandbox \"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:20:14.968338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052585417.mount: Deactivated successfully. Aug 13 07:20:14.977008 containerd[1979]: time="2025-08-13T07:20:14.974584471Z" level=info msg="CreateContainer within sandbox \"2f7b94b92bd9f519a38f72efb76c26d8820d060a5d26190fc3d2f80b92f78fdc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8acd6d5bdec52bbb0c275ce8344a3a6cff89eb85ab2bb82148ab954e922313cf\"" Aug 13 07:20:14.977448 containerd[1979]: time="2025-08-13T07:20:14.977406819Z" level=info msg="StartContainer for \"8acd6d5bdec52bbb0c275ce8344a3a6cff89eb85ab2bb82148ab954e922313cf\"" Aug 13 07:20:15.041966 systemd[1]: Started cri-containerd-8acd6d5bdec52bbb0c275ce8344a3a6cff89eb85ab2bb82148ab954e922313cf.scope - libcontainer container 8acd6d5bdec52bbb0c275ce8344a3a6cff89eb85ab2bb82148ab954e922313cf. 
Aug 13 07:20:15.101410 containerd[1979]: time="2025-08-13T07:20:15.101364528Z" level=info msg="StartContainer for \"8acd6d5bdec52bbb0c275ce8344a3a6cff89eb85ab2bb82148ab954e922313cf\" returns successfully" Aug 13 07:20:15.261159 kubelet[3177]: I0813 07:20:15.260874 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-785bf44c44-l2d2t" podStartSLOduration=1.4843894739999999 podStartE2EDuration="9.260856542s" podCreationTimestamp="2025-08-13 07:20:06 +0000 UTC" firstStartedPulling="2025-08-13 07:20:07.163453877 +0000 UTC m=+41.691973624" lastFinishedPulling="2025-08-13 07:20:14.939920944 +0000 UTC m=+49.468440692" observedRunningTime="2025-08-13 07:20:15.239916952 +0000 UTC m=+49.768436711" watchObservedRunningTime="2025-08-13 07:20:15.260856542 +0000 UTC m=+49.789376297" Aug 13 07:20:15.263289 kubelet[3177]: I0813 07:20:15.262000 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-k7r7v" podStartSLOduration=45.261988572 podStartE2EDuration="45.261988572s" podCreationTimestamp="2025-08-13 07:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:20:15.261492316 +0000 UTC m=+49.790012072" watchObservedRunningTime="2025-08-13 07:20:15.261988572 +0000 UTC m=+49.790508344" Aug 13 07:20:15.338941 systemd-networkd[1896]: calibed2aa8a47f: Gained IPv6LL Aug 13 07:20:15.978990 systemd-networkd[1896]: cali42a3971f2b7: Gained IPv6LL Aug 13 07:20:16.203160 systemd[1]: Started sshd@8-172.31.17.50:22-147.75.109.163:38682.service - OpenSSH per-connection server daemon (147.75.109.163:38682). Aug 13 07:20:16.405812 sshd[5818]: Accepted publickey for core from 147.75.109.163 port 38682 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:16.409255 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:16.415731 systemd-logind[1957]: New session 9 of user core. Aug 13 07:20:16.417874 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:20:16.989747 sshd[5818]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:16.994205 systemd-logind[1957]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:20:16.994535 systemd[1]: sshd@8-172.31.17.50:22-147.75.109.163:38682.service: Deactivated successfully. Aug 13 07:20:16.996970 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:20:16.997907 systemd-logind[1957]: Removed session 9. 
Aug 13 07:20:18.692399 ntpd[1950]: Listen normally on 7 vxlan.calico 192.168.78.0:123 Aug 13 07:20:18.692507 ntpd[1950]: Listen normally on 8 cali35f4fda3662 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 07:20:18.692566 ntpd[1950]: Listen normally on 9 vxlan.calico [fe80::643f:1ff:fe85:3c9%5]:123 Aug 13 07:20:18.692609 ntpd[1950]: Listen normally on 10 calif7db237a909 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 07:20:18.692650 ntpd[1950]: Listen normally on 11 calie37f4c9d927 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 07:20:18.692713 ntpd[1950]: Listen normally on 12 califeeae24646f [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 07:20:18.692757 ntpd[1950]: Listen normally on 13 cali44bd3786738 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 07:20:18.692798 ntpd[1950]: Listen normally on 14 cali72f6de35128 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 07:20:18.692837 ntpd[1950]: Listen normally on 15 calibed2aa8a47f [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 07:20:18.692873 ntpd[1950]: Listen normally on 16 cali42a3971f2b7 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 07:20:18.985078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605158375.mount: Deactivated successfully.
Aug 13 07:20:19.741255 containerd[1979]: time="2025-08-13T07:20:19.741185075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:19.743367 containerd[1979]: time="2025-08-13T07:20:19.743297342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:20:19.746163 containerd[1979]: time="2025-08-13T07:20:19.746088016Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:19.751640 containerd[1979]: time="2025-08-13T07:20:19.750116576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:19.751640 containerd[1979]: time="2025-08-13T07:20:19.751126169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.811020392s" Aug 13 07:20:19.751640 containerd[1979]: time="2025-08-13T07:20:19.751156662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:20:19.768562 containerd[1979]: time="2025-08-13T07:20:19.768527349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:20:19.770532 containerd[1979]: time="2025-08-13T07:20:19.770483613Z" level=info msg="CreateContainer within sandbox \"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:20:19.805060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599040819.mount: Deactivated successfully. Aug 13 07:20:19.817535 containerd[1979]: time="2025-08-13T07:20:19.817400951Z" level=info msg="CreateContainer within sandbox \"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e\"" Aug 13 07:20:19.818611 containerd[1979]: time="2025-08-13T07:20:19.818480078Z" level=info msg="StartContainer for \"1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e\"" Aug 13 07:20:19.876900 systemd[1]: Started cri-containerd-1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e.scope - libcontainer container 1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e. 
Aug 13 07:20:19.930575 containerd[1979]: time="2025-08-13T07:20:19.930522553Z" level=info msg="StartContainer for \"1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e\" returns successfully" Aug 13 07:20:20.269128 kubelet[3177]: I0813 07:20:20.269055 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-wq9m9" podStartSLOduration=26.656570401 podStartE2EDuration="34.269035809s" podCreationTimestamp="2025-08-13 07:19:46 +0000 UTC" firstStartedPulling="2025-08-13 07:20:12.148144591 +0000 UTC m=+46.676664339" lastFinishedPulling="2025-08-13 07:20:19.760610013 +0000 UTC m=+54.289129747" observedRunningTime="2025-08-13 07:20:20.266434721 +0000 UTC m=+54.794954476" watchObservedRunningTime="2025-08-13 07:20:20.269035809 +0000 UTC m=+54.797555564" Aug 13 07:20:21.320856 systemd[1]: run-containerd-runc-k8s.io-1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e-runc.V0TMb5.mount: Deactivated successfully. Aug 13 07:20:21.511275 containerd[1979]: time="2025-08-13T07:20:21.511218624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:21.513478 containerd[1979]: time="2025-08-13T07:20:21.513331183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:20:21.516238 containerd[1979]: time="2025-08-13T07:20:21.515908655Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:21.519634 containerd[1979]: time="2025-08-13T07:20:21.519563470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:21.536714 containerd[1979]: time="2025-08-13T07:20:21.536475011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.767735738s" Aug 13 07:20:21.536714 containerd[1979]: time="2025-08-13T07:20:21.536529086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:20:21.538045 containerd[1979]: time="2025-08-13T07:20:21.537964812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:20:21.544586 containerd[1979]: time="2025-08-13T07:20:21.544553188Z" level=info msg="CreateContainer within sandbox \"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:20:21.621605 containerd[1979]: time="2025-08-13T07:20:21.621291147Z" level=info msg="CreateContainer within sandbox \"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b2fe42305504097fe52653ec0f10e741d7c8c45f55f870785e7005c8aee65b64\"" Aug 13 07:20:21.622839 containerd[1979]: time="2025-08-13T07:20:21.622761310Z" level=info msg="StartContainer for \"b2fe42305504097fe52653ec0f10e741d7c8c45f55f870785e7005c8aee65b64\"" Aug 13 
07:20:21.670992 systemd[1]: Started cri-containerd-b2fe42305504097fe52653ec0f10e741d7c8c45f55f870785e7005c8aee65b64.scope - libcontainer container b2fe42305504097fe52653ec0f10e741d7c8c45f55f870785e7005c8aee65b64. Aug 13 07:20:21.709716 containerd[1979]: time="2025-08-13T07:20:21.709639654Z" level=info msg="StartContainer for \"b2fe42305504097fe52653ec0f10e741d7c8c45f55f870785e7005c8aee65b64\" returns successfully" Aug 13 07:20:22.027132 systemd[1]: Started sshd@9-172.31.17.50:22-147.75.109.163:39102.service - OpenSSH per-connection server daemon (147.75.109.163:39102). Aug 13 07:20:22.248123 sshd[5975]: Accepted publickey for core from 147.75.109.163 port 39102 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:22.251100 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:22.256489 systemd-logind[1957]: New session 10 of user core. Aug 13 07:20:22.260291 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:20:23.110217 sshd[5975]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:23.113595 systemd-logind[1957]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:20:23.114219 systemd[1]: sshd@9-172.31.17.50:22-147.75.109.163:39102.service: Deactivated successfully. Aug 13 07:20:23.117274 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:20:23.119733 systemd-logind[1957]: Removed session 10. Aug 13 07:20:23.147780 systemd[1]: Started sshd@10-172.31.17.50:22-147.75.109.163:39118.service - OpenSSH per-connection server daemon (147.75.109.163:39118). Aug 13 07:20:23.329613 sshd[6011]: Accepted publickey for core from 147.75.109.163 port 39118 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:23.331829 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:23.339432 systemd-logind[1957]: New session 11 of user core. Aug 13 07:20:23.348979 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:20:23.900714 sshd[6011]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:23.917892 systemd[1]: sshd@10-172.31.17.50:22-147.75.109.163:39118.service: Deactivated successfully. Aug 13 07:20:23.919833 systemd-logind[1957]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:20:23.923256 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:20:23.943867 systemd-logind[1957]: Removed session 11. Aug 13 07:20:23.954087 systemd[1]: Started sshd@11-172.31.17.50:22-147.75.109.163:39122.service - OpenSSH per-connection server daemon (147.75.109.163:39122). Aug 13 07:20:24.187496 sshd[6026]: Accepted publickey for core from 147.75.109.163 port 39122 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:24.190863 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:24.200415 systemd-logind[1957]: New session 12 of user core. Aug 13 07:20:24.206247 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:20:24.594932 sshd[6026]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:24.601752 systemd[1]: sshd@11-172.31.17.50:22-147.75.109.163:39122.service: Deactivated successfully. Aug 13 07:20:24.605609 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:20:24.607418 systemd-logind[1957]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:20:24.609210 systemd-logind[1957]: Removed session 12. 
Aug 13 07:20:25.256987 containerd[1979]: time="2025-08-13T07:20:25.256920191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:25.259092 containerd[1979]: time="2025-08-13T07:20:25.259029196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:20:25.262552 containerd[1979]: time="2025-08-13T07:20:25.262485857Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:25.268701 containerd[1979]: time="2025-08-13T07:20:25.267537884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:25.269003 containerd[1979]: time="2025-08-13T07:20:25.268975651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.730771212s" Aug 13 07:20:25.269139 containerd[1979]: time="2025-08-13T07:20:25.269122562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:20:25.415548 containerd[1979]: time="2025-08-13T07:20:25.415197862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:20:25.445968 containerd[1979]: time="2025-08-13T07:20:25.445924529Z" level=info msg="CreateContainer within sandbox \"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:20:25.513537 containerd[1979]: time="2025-08-13T07:20:25.513331624Z" level=info msg="CreateContainer within sandbox \"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2cb1d1c03dab72bf9573e0c31ba3767b581b07a423725050e71b30f988a9b434\"" Aug 13 07:20:25.515397 containerd[1979]: time="2025-08-13T07:20:25.514316284Z" level=info msg="StartContainer for \"2cb1d1c03dab72bf9573e0c31ba3767b581b07a423725050e71b30f988a9b434\"" Aug 13 07:20:25.587351 systemd[1]: Started cri-containerd-2cb1d1c03dab72bf9573e0c31ba3767b581b07a423725050e71b30f988a9b434.scope - libcontainer container 2cb1d1c03dab72bf9573e0c31ba3767b581b07a423725050e71b30f988a9b434. Aug 13 07:20:25.652007 containerd[1979]: time="2025-08-13T07:20:25.651599222Z" level=info msg="StartContainer for \"2cb1d1c03dab72bf9573e0c31ba3767b581b07a423725050e71b30f988a9b434\" returns successfully" Aug 13 07:20:26.336430 containerd[1979]: time="2025-08-13T07:20:26.336382509Z" level=info msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:26.890 [WARNING][6095] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c765fc0f-d20b-48b8-a94e-e269e9f26813", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b", Pod:"calico-apiserver-745bf746c6-vph99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed2aa8a47f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:26.895 [INFO][6095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:26.895 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" iface="eth0" netns="" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:26.895 [INFO][6095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:26.895 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.355 [INFO][6105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.362 [INFO][6105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.365 [INFO][6105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.383 [WARNING][6105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.383 [INFO][6105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.385 [INFO][6105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:27.395701 containerd[1979]: 2025-08-13 07:20:27.388 [INFO][6095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:27.419588 containerd[1979]: time="2025-08-13T07:20:27.419530364Z" level=info msg="TearDown network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" successfully" Aug 13 07:20:27.419791 containerd[1979]: time="2025-08-13T07:20:27.419770269Z" level=info msg="StopPodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" returns successfully" Aug 13 07:20:27.682726 kubelet[3177]: I0813 07:20:27.682240 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:20:27.940068 containerd[1979]: time="2025-08-13T07:20:27.939549553Z" level=info msg="RemovePodSandbox for \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" Aug 13 07:20:27.971858 containerd[1979]: time="2025-08-13T07:20:27.971807594Z" level=info msg="Forcibly stopping sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\"" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.052 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c765fc0f-d20b-48b8-a94e-e269e9f26813", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b", Pod:"calico-apiserver-745bf746c6-vph99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibed2aa8a47f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.053 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.053 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" iface="eth0" netns="" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.053 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.053 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.097 [INFO][6147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.098 [INFO][6147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.098 [INFO][6147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.107 [WARNING][6147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.107 [INFO][6147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" HandleID="k8s-pod-network.e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--vph99-eth0" Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.109 [INFO][6147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.119753 containerd[1979]: 2025-08-13 07:20:28.116 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc" Aug 13 07:20:28.121513 containerd[1979]: time="2025-08-13T07:20:28.119834753Z" level=info msg="TearDown network for sandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" successfully" Aug 13 07:20:28.133040 containerd[1979]: time="2025-08-13T07:20:28.132978764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:28.144508 containerd[1979]: time="2025-08-13T07:20:28.144390296Z" level=info msg="RemovePodSandbox \"e36dea76ee52e9a7e8a0ed88f845f699112a124e03b3446511e1bdb0466711bc\" returns successfully" Aug 13 07:20:28.158359 containerd[1979]: time="2025-08-13T07:20:28.157771092Z" level=info msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.244 [WARNING][6161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568b68e-bbde-4103-a6ee-c309f4a8bbce", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76", Pod:"csi-node-driver-xh28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeeae24646f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.245 [INFO][6161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.245 [INFO][6161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" iface="eth0" netns="" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.245 [INFO][6161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.245 [INFO][6161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.275 [INFO][6168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.276 [INFO][6168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.276 [INFO][6168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.283 [WARNING][6168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.283 [INFO][6168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.285 [INFO][6168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.290773 containerd[1979]: 2025-08-13 07:20:28.287 [INFO][6161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.290773 containerd[1979]: time="2025-08-13T07:20:28.289475322Z" level=info msg="TearDown network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" successfully" Aug 13 07:20:28.290773 containerd[1979]: time="2025-08-13T07:20:28.289506793Z" level=info msg="StopPodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" returns successfully" Aug 13 07:20:28.290773 containerd[1979]: time="2025-08-13T07:20:28.290015928Z" level=info msg="RemovePodSandbox for \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" Aug 13 07:20:28.290773 containerd[1979]: time="2025-08-13T07:20:28.290049748Z" level=info msg="Forcibly stopping sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\"" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.332 [WARNING][6183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4568b68e-bbde-4103-a6ee-c309f4a8bbce", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76", Pod:"csi-node-driver-xh28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.78.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeeae24646f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.333 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.333 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" iface="eth0" netns="" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.333 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.333 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.357 [INFO][6190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.357 [INFO][6190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.357 [INFO][6190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.368 [WARNING][6190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.368 [INFO][6190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" HandleID="k8s-pod-network.e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Workload="ip--172--31--17--50-k8s-csi--node--driver--xh28p-eth0" Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.370 [INFO][6190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.375060 containerd[1979]: 2025-08-13 07:20:28.372 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3" Aug 13 07:20:28.376378 containerd[1979]: time="2025-08-13T07:20:28.375106747Z" level=info msg="TearDown network for sandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" successfully" Aug 13 07:20:28.382933 containerd[1979]: time="2025-08-13T07:20:28.382873922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:28.385079 containerd[1979]: time="2025-08-13T07:20:28.382948041Z" level=info msg="RemovePodSandbox \"e8e1d6b5723830069e93797907596f19908927f538b1e5bcbf907202f19ce5a3\" returns successfully" Aug 13 07:20:28.385079 containerd[1979]: time="2025-08-13T07:20:28.383423092Z" level=info msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.437 [WARNING][6204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bd32d8f1-1e08-483e-bee4-9fb610250047", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe", Pod:"coredns-7c65d6cfc9-k7r7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a3971f2b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.437 [INFO][6204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.437 [INFO][6204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" iface="eth0" netns="" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.437 [INFO][6204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.437 [INFO][6204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.471 [INFO][6211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.471 [INFO][6211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.471 [INFO][6211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.477 [WARNING][6211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.478 [INFO][6211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.479 [INFO][6211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.483889 containerd[1979]: 2025-08-13 07:20:28.481 [INFO][6204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.487367 containerd[1979]: time="2025-08-13T07:20:28.483931995Z" level=info msg="TearDown network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" successfully" Aug 13 07:20:28.487367 containerd[1979]: time="2025-08-13T07:20:28.483962797Z" level=info msg="StopPodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" returns successfully" Aug 13 07:20:28.487367 containerd[1979]: time="2025-08-13T07:20:28.484532550Z" level=info msg="RemovePodSandbox for \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" Aug 13 07:20:28.487367 containerd[1979]: time="2025-08-13T07:20:28.484565084Z" level=info msg="Forcibly stopping sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\"" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.522 [WARNING][6225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bd32d8f1-1e08-483e-bee4-9fb610250047", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"fcdb08123485aa8ca96bce99bba5afd0b9b04061e3565517c232929428e739fe", Pod:"coredns-7c65d6cfc9-k7r7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a3971f2b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.524 [INFO][6225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.524 [INFO][6225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" iface="eth0" netns="" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.524 [INFO][6225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.524 [INFO][6225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.557 [INFO][6232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.557 [INFO][6232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.558 [INFO][6232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.563 [WARNING][6232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.563 [INFO][6232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" HandleID="k8s-pod-network.905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--k7r7v-eth0" Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.565 [INFO][6232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.570711 containerd[1979]: 2025-08-13 07:20:28.567 [INFO][6225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff" Aug 13 07:20:28.570711 containerd[1979]: time="2025-08-13T07:20:28.569107150Z" level=info msg="TearDown network for sandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" successfully" Aug 13 07:20:28.574282 containerd[1979]: time="2025-08-13T07:20:28.574233668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:28.574372 containerd[1979]: time="2025-08-13T07:20:28.574309916Z" level=info msg="RemovePodSandbox \"905d51da9c02d04265ffd1e261b479e99c11dfa5c690a4326ef9fb7de6bfc2ff\" returns successfully" Aug 13 07:20:28.574933 containerd[1979]: time="2025-08-13T07:20:28.574903181Z" level=info msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.618 [WARNING][6247] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.618 [INFO][6247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.618 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" iface="eth0" netns="" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.618 [INFO][6247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.618 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.643 [INFO][6255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.643 [INFO][6255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.643 [INFO][6255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.650 [WARNING][6255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.650 [INFO][6255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.652 [INFO][6255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.657240 containerd[1979]: 2025-08-13 07:20:28.654 [INFO][6247] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.658975 containerd[1979]: time="2025-08-13T07:20:28.657291867Z" level=info msg="TearDown network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" successfully" Aug 13 07:20:28.658975 containerd[1979]: time="2025-08-13T07:20:28.657319064Z" level=info msg="StopPodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" returns successfully" Aug 13 07:20:28.658975 containerd[1979]: time="2025-08-13T07:20:28.658123655Z" level=info msg="RemovePodSandbox for \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" Aug 13 07:20:28.658975 containerd[1979]: time="2025-08-13T07:20:28.658156828Z" level=info msg="Forcibly stopping sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\"" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.707 [WARNING][6269] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" WorkloadEndpoint="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.707 [INFO][6269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.707 [INFO][6269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" iface="eth0" netns="" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.707 [INFO][6269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.707 [INFO][6269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.730 [INFO][6276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.730 [INFO][6276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.730 [INFO][6276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.736 [WARNING][6276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.736 [INFO][6276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" HandleID="k8s-pod-network.6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Workload="ip--172--31--17--50-k8s-whisker--59f745bdfb--2h8n7-eth0" Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.738 [INFO][6276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.743535 containerd[1979]: 2025-08-13 07:20:28.740 [INFO][6269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b" Aug 13 07:20:28.745663 containerd[1979]: time="2025-08-13T07:20:28.743582695Z" level=info msg="TearDown network for sandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" successfully" Aug 13 07:20:28.760154 containerd[1979]: time="2025-08-13T07:20:28.760107542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:28.760327 containerd[1979]: time="2025-08-13T07:20:28.760177344Z" level=info msg="RemovePodSandbox \"6ec4de0ad1d235c617f973654eac3fc60df5d773b0ba304ba6427c308bcfc41b\" returns successfully" Aug 13 07:20:28.761066 containerd[1979]: time="2025-08-13T07:20:28.760758334Z" level=info msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.807 [WARNING][6290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"55eb4daa-f610-43f7-8938-3afe0a026eea", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85", Pod:"calico-apiserver-745bf746c6-bcs2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f6de35128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.808 [INFO][6290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.808 [INFO][6290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" iface="eth0" netns="" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.808 [INFO][6290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.808 [INFO][6290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.841 [INFO][6297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.842 [INFO][6297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.842 [INFO][6297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.848 [WARNING][6297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.848 [INFO][6297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.850 [INFO][6297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.853884 containerd[1979]: 2025-08-13 07:20:28.852 [INFO][6290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.853884 containerd[1979]: time="2025-08-13T07:20:28.853863720Z" level=info msg="TearDown network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" successfully" Aug 13 07:20:28.853884 containerd[1979]: time="2025-08-13T07:20:28.853890557Z" level=info msg="StopPodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" returns successfully" Aug 13 07:20:28.854537 containerd[1979]: time="2025-08-13T07:20:28.854334859Z" level=info msg="RemovePodSandbox for \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" Aug 13 07:20:28.854537 containerd[1979]: time="2025-08-13T07:20:28.854358885Z" level=info msg="Forcibly stopping sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\"" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.888 [WARNING][6311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0", GenerateName:"calico-apiserver-745bf746c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"55eb4daa-f610-43f7-8938-3afe0a026eea", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745bf746c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"16cc102e6703c2a419ccab2a9a4076a4d33267b3f851e11a36bfa58c14c2ff85", Pod:"calico-apiserver-745bf746c6-bcs2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72f6de35128", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.888 [INFO][6311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.888 [INFO][6311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" iface="eth0" netns="" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.888 [INFO][6311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.888 [INFO][6311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.911 [INFO][6318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.911 [INFO][6318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.911 [INFO][6318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.917 [WARNING][6318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.917 [INFO][6318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" HandleID="k8s-pod-network.b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Workload="ip--172--31--17--50-k8s-calico--apiserver--745bf746c6--bcs2t-eth0" Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.919 [INFO][6318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:28.922768 containerd[1979]: 2025-08-13 07:20:28.920 [INFO][6311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde" Aug 13 07:20:28.922768 containerd[1979]: time="2025-08-13T07:20:28.922656330Z" level=info msg="TearDown network for sandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" successfully" Aug 13 07:20:28.930912 containerd[1979]: time="2025-08-13T07:20:28.930867163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:28.931082 containerd[1979]: time="2025-08-13T07:20:28.931007489Z" level=info msg="RemovePodSandbox \"b95020e0a983faec8c5bc2602fc954507dcc7d01444cf240b2e2e6fa324e6dde\" returns successfully" Aug 13 07:20:28.931631 containerd[1979]: time="2025-08-13T07:20:28.931598303Z" level=info msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:28.971 [WARNING][6332] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"dc25405d-dcbb-4f86-bd65-6c51296b34d0", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8", Pod:"goldmane-58fd7646b9-wq9m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie37f4c9d927", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:28.972 [INFO][6332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:28.972 [INFO][6332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" iface="eth0" netns="" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:28.972 [INFO][6332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:28.972 [INFO][6332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.013 [INFO][6339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.013 [INFO][6339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.013 [INFO][6339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.025 [WARNING][6339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.025 [INFO][6339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.027 [INFO][6339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:29.034650 containerd[1979]: 2025-08-13 07:20:29.031 [INFO][6332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.036001 containerd[1979]: time="2025-08-13T07:20:29.034762882Z" level=info msg="TearDown network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" successfully" Aug 13 07:20:29.036001 containerd[1979]: time="2025-08-13T07:20:29.035142622Z" level=info msg="StopPodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" returns successfully" Aug 13 07:20:29.036131 containerd[1979]: time="2025-08-13T07:20:29.036070986Z" level=info msg="RemovePodSandbox for \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" Aug 13 07:20:29.036403 containerd[1979]: time="2025-08-13T07:20:29.036377595Z" level=info msg="Forcibly stopping sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\"" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.100 [WARNING][6357] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"dc25405d-dcbb-4f86-bd65-6c51296b34d0", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c24fe755b29ad2377653161c5603fff0ff3fcfa42bed05e25b552e0a4bbb33f8", Pod:"goldmane-58fd7646b9-wq9m9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.78.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie37f4c9d927", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.101 [INFO][6357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.101 [INFO][6357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" iface="eth0" netns="" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.101 [INFO][6357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.102 [INFO][6357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.157 [INFO][6365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.157 [INFO][6365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.158 [INFO][6365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.170 [WARNING][6365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.170 [INFO][6365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" HandleID="k8s-pod-network.188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Workload="ip--172--31--17--50-k8s-goldmane--58fd7646b9--wq9m9-eth0" Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.173 [INFO][6365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:29.179628 containerd[1979]: 2025-08-13 07:20:29.175 [INFO][6357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a" Aug 13 07:20:29.180832 containerd[1979]: time="2025-08-13T07:20:29.179805251Z" level=info msg="TearDown network for sandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" successfully" Aug 13 07:20:29.200005 containerd[1979]: time="2025-08-13T07:20:29.199950206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:29.200164 containerd[1979]: time="2025-08-13T07:20:29.200036055Z" level=info msg="RemovePodSandbox \"188f06f08da4605d4a15555fa5b86481a4da3f0324efea51e24f06cb10e0a74a\" returns successfully" Aug 13 07:20:29.200741 containerd[1979]: time="2025-08-13T07:20:29.200696796Z" level=info msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" Aug 13 07:20:29.315228 systemd[1]: run-containerd-runc-k8s.io-1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e-runc.9KxnDa.mount: Deactivated successfully. Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.337 [WARNING][6379] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0", GenerateName:"calico-kube-controllers-5696577845-", Namespace:"calico-system", SelfLink:"", UID:"8fda3464-db28-46a1-ac0f-62ebd7a09936", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5696577845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05", Pod:"calico-kube-controllers-5696577845-qp9gz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44bd3786738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.337 [INFO][6379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.337 [INFO][6379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" iface="eth0" netns="" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.337 [INFO][6379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.337 [INFO][6379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.432 [INFO][6401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.433 [INFO][6401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.433 [INFO][6401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.448 [WARNING][6401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.448 [INFO][6401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.452 [INFO][6401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:29.466997 containerd[1979]: 2025-08-13 07:20:29.462 [INFO][6379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.466997 containerd[1979]: time="2025-08-13T07:20:29.466972886Z" level=info msg="TearDown network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" successfully" Aug 13 07:20:29.469400 containerd[1979]: time="2025-08-13T07:20:29.467007982Z" level=info msg="StopPodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" returns successfully" Aug 13 07:20:29.469400 containerd[1979]: time="2025-08-13T07:20:29.468141731Z" level=info msg="RemovePodSandbox for \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" Aug 13 07:20:29.469400 containerd[1979]: time="2025-08-13T07:20:29.468179290Z" level=info msg="Forcibly stopping sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\"" Aug 13 07:20:29.698349 systemd[1]: Started sshd@12-172.31.17.50:22-147.75.109.163:58496.service - OpenSSH per-connection server daemon (147.75.109.163:58496). Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.570 [WARNING][6419] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0", GenerateName:"calico-kube-controllers-5696577845-", Namespace:"calico-system", SelfLink:"", UID:"8fda3464-db28-46a1-ac0f-62ebd7a09936", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5696577845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05", Pod:"calico-kube-controllers-5696577845-qp9gz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.78.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44bd3786738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.571 [INFO][6419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.571 [INFO][6419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" iface="eth0" netns="" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.571 [INFO][6419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.571 [INFO][6419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.619 [INFO][6426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.630 [INFO][6426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.630 [INFO][6426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.683 [WARNING][6426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.683 [INFO][6426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" HandleID="k8s-pod-network.b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Workload="ip--172--31--17--50-k8s-calico--kube--controllers--5696577845--qp9gz-eth0" Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.691 [INFO][6426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:29.704944 containerd[1979]: 2025-08-13 07:20:29.696 [INFO][6419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14" Aug 13 07:20:29.704944 containerd[1979]: time="2025-08-13T07:20:29.704565442Z" level=info msg="TearDown network for sandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" successfully" Aug 13 07:20:29.715523 containerd[1979]: time="2025-08-13T07:20:29.714291894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:29.715523 containerd[1979]: time="2025-08-13T07:20:29.714373000Z" level=info msg="RemovePodSandbox \"b93d8f99246fb56d67fa1129922fe22220d27dbf0135ce19e15f37a92caa8c14\" returns successfully" Aug 13 07:20:29.771125 containerd[1979]: time="2025-08-13T07:20:29.770742831Z" level=info msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" Aug 13 07:20:29.987808 sshd[6437]: Accepted publickey for core from 147.75.109.163 port 58496 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI Aug 13 07:20:29.990952 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.892 [WARNING][6446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78b6f36b-2fe6-4969-b860-8d05648c6f1d", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3", Pod:"coredns-7c65d6cfc9-f4dvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7db237a909", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.892 [INFO][6446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.892 [INFO][6446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" iface="eth0" netns="" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.892 [INFO][6446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.892 [INFO][6446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.958 [INFO][6460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.958 [INFO][6460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.958 [INFO][6460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.976 [WARNING][6460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.976 [INFO][6460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.980 [INFO][6460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:29.991735 containerd[1979]: 2025-08-13 07:20:29.984 [INFO][6446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:29.993362 containerd[1979]: time="2025-08-13T07:20:29.992748248Z" level=info msg="TearDown network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" successfully" Aug 13 07:20:29.993410 containerd[1979]: time="2025-08-13T07:20:29.993384999Z" level=info msg="StopPodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" returns successfully" Aug 13 07:20:29.994929 containerd[1979]: time="2025-08-13T07:20:29.994662058Z" level=info msg="RemovePodSandbox for \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" Aug 13 07:20:29.994929 containerd[1979]: time="2025-08-13T07:20:29.994719480Z" level=info msg="Forcibly stopping sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\"" Aug 13 07:20:30.009292 systemd-logind[1957]: New session 13 of user core. Aug 13 07:20:30.014998 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.087 [WARNING][6474] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78b6f36b-2fe6-4969-b860-8d05648c6f1d", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 19, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-50", ContainerID:"c101e70c7926ba4b1bc385eb5b131f2e3c7f556cc32877025b1160866fcf4be3", Pod:"coredns-7c65d6cfc9-f4dvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.78.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7db237a909", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.088 [INFO][6474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.088 [INFO][6474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" iface="eth0" netns="" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.088 [INFO][6474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.088 [INFO][6474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.155 [INFO][6482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.155 [INFO][6482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.155 [INFO][6482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.168 [WARNING][6482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.168 [INFO][6482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" HandleID="k8s-pod-network.d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Workload="ip--172--31--17--50-k8s-coredns--7c65d6cfc9--f4dvw-eth0" Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.170 [INFO][6482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:20:30.180520 containerd[1979]: 2025-08-13 07:20:30.173 [INFO][6474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4" Aug 13 07:20:30.182364 containerd[1979]: time="2025-08-13T07:20:30.181444607Z" level=info msg="TearDown network for sandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" successfully" Aug 13 07:20:30.196823 containerd[1979]: time="2025-08-13T07:20:30.196780115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:20:30.197064 containerd[1979]: time="2025-08-13T07:20:30.197040663Z" level=info msg="RemovePodSandbox \"d4edf3ef644145418fd7b8426c8c16c42e186ed8e08e06b6163d66c90656ded4\" returns successfully" Aug 13 07:20:31.126324 sshd[6437]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:31.135862 systemd[1]: sshd@12-172.31.17.50:22-147.75.109.163:58496.service: Deactivated successfully. Aug 13 07:20:31.139354 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:20:31.143725 systemd-logind[1957]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:20:31.147806 systemd-logind[1957]: Removed session 13. 
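The repeated Calico teardown sequences above all trace the same control flow in ipam_plugin.go: acquire the host-wide IPAM lock, attempt to release the allocation by its handle ID, fall back to releasing by workload ID when the handle is already gone (the recurring "Asked to release address but it doesn't exist. Ignoring" warning), then drop the lock and let the CNI plugin finish teardown. A minimal Go sketch of that pattern follows; it is schematic, not Calico's actual code, and releaseByHandle/releaseByWorkload are hypothetical stand-ins for the datastore calls.

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    // errNotFound stands in for the "address doesn't exist" condition in the log.
    var errNotFound = errors.New("address not found")

    // ipamLock models the host-wide IPAM lock the plugin logs acquiring and releasing.
    var ipamLock sync.Mutex

    // releaseByHandle and releaseByWorkload are hypothetical stand-ins for the
    // datastore calls; this stub simulates the WARNING path seen above.
    func releaseByHandle(handleID string) error { return errNotFound }

    func releaseByWorkload(workloadID string) error { return nil }

    // releaseAddress mirrors the logged sequence: lock (ipam_plugin.go 353/368),
    // release by handle ID (412), ignore "doesn't exist" (429), retry by
    // workload ID (440), unlock (374).
    func releaseAddress(handleID, workloadID string) error {
        ipamLock.Lock()
        defer ipamLock.Unlock()

        err := releaseByHandle(handleID)
        if err == nil || !errors.Is(err, errNotFound) {
            return err
        }
        // Address already gone under this handle; fall back to the workload ID.
        return releaseByWorkload(workloadID)
    }

    func main() {
        fmt.Println(releaseAddress("k8s-pod-network.905d51da", "coredns-7c65d6cfc9-k7r7v"))
    }

The fallback is why the warnings above are harmless: on a forced "Forcibly stopping sandbox" pass the address was usually freed by an earlier teardown, so the handle lookup fails and the release degrades to a no-op.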
Aug 13 07:20:31.854441 containerd[1979]: time="2025-08-13T07:20:31.854267604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:31.867349 containerd[1979]: time="2025-08-13T07:20:31.867280493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:20:31.903085 containerd[1979]: time="2025-08-13T07:20:31.902999602Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:31.928778 containerd[1979]: time="2025-08-13T07:20:31.928734009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:31.943509 containerd[1979]: time="2025-08-13T07:20:31.943432750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.525807614s" Aug 13 07:20:31.943509 containerd[1979]: time="2025-08-13T07:20:31.943486039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:20:32.025117 containerd[1979]: time="2025-08-13T07:20:32.025058933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:20:32.177881 containerd[1979]: time="2025-08-13T07:20:32.177827968Z" level=info msg="CreateContainer within sandbox \"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:20:32.227261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164503599.mount: Deactivated successfully. 
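For scale: the kube-controllers pull above transferred 51,276,688 bytes ("bytes read" appears to count data actually fetched, while the Pulled record reports the resolved image size of 52,769,359 bytes), so the 6.525807614s pull works out to roughly 51,276,688 / 6.5258 ≈ 7.9 MB/s end to end on this t3.small.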
Aug 13 07:20:32.261097 containerd[1979]: time="2025-08-13T07:20:32.261050101Z" level=info msg="CreateContainer within sandbox \"bf947c5f34fcd4b4f87ee86be7d1f50208266894fd538396af1f7ecc7691be05\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1\"" Aug 13 07:20:32.270935 containerd[1979]: time="2025-08-13T07:20:32.270455950Z" level=info msg="StartContainer for \"0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1\"" Aug 13 07:20:32.365371 containerd[1979]: time="2025-08-13T07:20:32.365326075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:20:32.368319 containerd[1979]: time="2025-08-13T07:20:32.367337603Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:20:32.370170 containerd[1979]: time="2025-08-13T07:20:32.370140275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 345.024507ms" Aug 13 07:20:32.370300 containerd[1979]: time="2025-08-13T07:20:32.370286922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:20:32.385066 containerd[1979]: time="2025-08-13T07:20:32.384865334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:20:32.391707 containerd[1979]: time="2025-08-13T07:20:32.390662841Z" level=info msg="CreateContainer within sandbox \"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:20:32.428209 containerd[1979]: time="2025-08-13T07:20:32.427916386Z" level=info msg="CreateContainer within sandbox \"77e7882ab85ab46a9b095398fa1ccfb691d760e88c657ad9f0e287c18d09c80b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f040b11a643d36a7385d205aa950097a7d6bc465674fc0d8ddb8af05fcac0813\"" Aug 13 07:20:32.433915 containerd[1979]: time="2025-08-13T07:20:32.433865201Z" level=info msg="StartContainer for \"f040b11a643d36a7385d205aa950097a7d6bc465674fc0d8ddb8af05fcac0813\"" Aug 13 07:20:32.725884 systemd[1]: Started cri-containerd-0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1.scope - libcontainer container 0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1. Aug 13 07:20:32.735362 systemd[1]: Started cri-containerd-f040b11a643d36a7385d205aa950097a7d6bc465674fc0d8ddb8af05fcac0813.scope - libcontainer container f040b11a643d36a7385d205aa950097a7d6bc465674fc0d8ddb8af05fcac0813. 
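The PullImage / CreateContainer / StartContainer records above are the CRI-level view of the container lifecycle, and the systemd "Started cri-containerd-….scope" lines are the runc shims being placed into their own cgroup scopes. For reference, the same pull-create-start lifecycle can be driven directly against containerd with its Go client; this is a bare sketch assuming the default socket and namespace, and it skips everything the CRI path adds here (pod sandboxes, CNI wiring, the per-container scopes seen in the log).

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Assumes a local containerd at the default socket path.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "default")

        // PullImage: fetch and unpack, as in the log's PullImage records.
        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer: a new snapshot plus a spec derived from the image config.
        container, err := client.NewContainer(ctx, "kube-controllers-demo",
            containerd.WithNewSnapshot("kube-controllers-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // StartContainer: create the task (the shim behind the systemd scope
        // lines above) and start it.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }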
Aug 13 07:20:32.833898 containerd[1979]: time="2025-08-13T07:20:32.833857726Z" level=info msg="StartContainer for \"f040b11a643d36a7385d205aa950097a7d6bc465674fc0d8ddb8af05fcac0813\" returns successfully" Aug 13 07:20:32.868329 containerd[1979]: time="2025-08-13T07:20:32.868266111Z" level=info msg="StartContainer for \"0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1\" returns successfully" Aug 13 07:20:33.619248 kubelet[3177]: I0813 07:20:33.600380 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5696577845-qp9gz" podStartSLOduration=28.58915782 podStartE2EDuration="46.572967682s" podCreationTimestamp="2025-08-13 07:19:47 +0000 UTC" firstStartedPulling="2025-08-13 07:20:13.997250542 +0000 UTC m=+48.525770291" lastFinishedPulling="2025-08-13 07:20:31.981060417 +0000 UTC m=+66.509580153" observedRunningTime="2025-08-13 07:20:33.554215974 +0000 UTC m=+68.082735731" watchObservedRunningTime="2025-08-13 07:20:33.572967682 +0000 UTC m=+68.101487436" Aug 13 07:20:33.661305 kubelet[3177]: I0813 07:20:33.660078 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745bf746c6-bcs2t" podStartSLOduration=41.290609794 podStartE2EDuration="52.660053576s" podCreationTimestamp="2025-08-13 07:19:41 +0000 UTC" firstStartedPulling="2025-08-13 07:20:13.993750271 +0000 UTC m=+48.522270017" lastFinishedPulling="2025-08-13 07:20:25.363194059 +0000 UTC m=+59.891713799" observedRunningTime="2025-08-13 07:20:26.459633596 +0000 UTC m=+60.988153353" watchObservedRunningTime="2025-08-13 07:20:33.660053576 +0000 UTC m=+68.188573334" Aug 13 07:20:33.664879 kubelet[3177]: I0813 07:20:33.664813 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745bf746c6-vph99" podStartSLOduration=34.535575588 podStartE2EDuration="52.664793201s" podCreationTimestamp="2025-08-13 07:19:41 +0000 UTC" firstStartedPulling="2025-08-13 07:20:14.255469819 +0000 UTC m=+48.783989556" lastFinishedPulling="2025-08-13 07:20:32.384687404 +0000 UTC m=+66.913207169" observedRunningTime="2025-08-13 07:20:33.619998286 +0000 UTC m=+68.148518033" watchObservedRunningTime="2025-08-13 07:20:33.664793201 +0000 UTC m=+68.193312958" Aug 13 07:20:34.107275 kubelet[3177]: I0813 07:20:34.106450 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:20:34.166815 systemd[1]: run-containerd-runc-k8s.io-0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1-runc.cXnE1Z.mount: Deactivated successfully. 
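The pod_startup_latency_tracker records above encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is, to within print rounding, that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the kubelet excluding pull time from the startup SLO. Checking the calico-kube-controllers record: 46.572967682s − (31.981060417 − 13.997250542)s ≈ 28.589157807s, matching the reported 28.58915782s. The same arithmetic in Go, with the timestamps copied from that record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps from the calico-kube-controllers-5696577845-qp9gz record.
        created, _ := time.Parse(time.RFC3339, "2025-08-13T07:19:47Z")
        firstPull, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:20:13.997250542Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:20:31.981060417Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:20:33.572967682Z")

        e2e := running.Sub(created)          // podStartE2EDuration: 46.572967682s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~28.589157807s
        fmt.Println(e2e, slo)
    }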
Aug 13 07:20:35.851137 kubelet[3177]: I0813 07:20:35.851108 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:20:35.994969 containerd[1979]: time="2025-08-13T07:20:35.994728708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:20:35.997865 containerd[1979]: time="2025-08-13T07:20:35.997811095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 07:20:36.002446 containerd[1979]: time="2025-08-13T07:20:36.001729626Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:20:36.013101 containerd[1979]: time="2025-08-13T07:20:36.012475364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:20:36.017813 containerd[1979]: time="2025-08-13T07:20:36.017765862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.632862284s"
Aug 13 07:20:36.017813 containerd[1979]: time="2025-08-13T07:20:36.017815968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 07:20:36.027961 containerd[1979]: time="2025-08-13T07:20:36.027912040Z" level=info msg="CreateContainer within sandbox \"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 07:20:36.092651 containerd[1979]: time="2025-08-13T07:20:36.092607454Z" level=info msg="CreateContainer within sandbox \"036b16cf9160f7b9930c0a09c97f5bb4a44abbca30f63718847525c6a1023e76\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f20877d4ce411e069bfd71df41b927b0c9f170863d8afcc268175121cbc632dc\""
Aug 13 07:20:36.094347 containerd[1979]: time="2025-08-13T07:20:36.093373792Z" level=info msg="StartContainer for \"f20877d4ce411e069bfd71df41b927b0c9f170863d8afcc268175121cbc632dc\""
Aug 13 07:20:36.169981 systemd[1]: Started sshd@13-172.31.17.50:22-147.75.109.163:58500.service - OpenSSH per-connection server daemon (147.75.109.163:58500).
Aug 13 07:20:36.257093 systemd[1]: Started cri-containerd-f20877d4ce411e069bfd71df41b927b0c9f170863d8afcc268175121cbc632dc.scope - libcontainer container f20877d4ce411e069bfd71df41b927b0c9f170863d8afcc268175121cbc632dc.
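
[Annotation] The "in 3.632862284s" figure above can be cross-checked against the PullImage request for node-driver-registrar logged earlier (msg time 2025-08-13T07:20:32.384865334Z): the reported duration is the span between request and completion, with the "Pulled image" line emitted a few tens of microseconds after the pull actually finished. A small verification sketch (timestamps copied from the entries, nanoseconds truncated to microseconds):

    from datetime import datetime, timedelta

    def iso(ts):  # containerd msg timestamps, nanoseconds truncated
        return datetime.fromisoformat(ts[:26])

    requested = iso("2025-08-13T07:20:32.384865334")
    completed = iso("2025-08-13T07:20:36.017765862")
    reported  = timedelta(seconds=3.632862284)
    print(completed - requested - reported)  # ~38 microseconds of log-emission slack
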
Aug 13 07:20:36.308563 containerd[1979]: time="2025-08-13T07:20:36.308355464Z" level=info msg="StartContainer for \"f20877d4ce411e069bfd71df41b927b0c9f170863d8afcc268175121cbc632dc\" returns successfully"
Aug 13 07:20:36.461121 sshd[6675]: Accepted publickey for core from 147.75.109.163 port 58500 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:36.465554 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:36.471925 systemd-logind[1957]: New session 14 of user core.
Aug 13 07:20:36.475865 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:20:37.023980 kubelet[3177]: I0813 07:20:37.012599 3177 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 07:20:37.024424 kubelet[3177]: I0813 07:20:37.024002 3177 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 07:20:37.663239 sshd[6675]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:37.667700 systemd[1]: sshd@13-172.31.17.50:22-147.75.109.163:58500.service: Deactivated successfully.
Aug 13 07:20:37.669528 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:20:37.672250 systemd-logind[1957]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:20:37.673453 systemd-logind[1957]: Removed session 14.
Aug 13 07:20:41.142690 kubelet[3177]: I0813 07:20:41.140955 3177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:20:41.163215 kubelet[3177]: I0813 07:20:41.163108 3177 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xh28p" podStartSLOduration=30.35590797 podStartE2EDuration="54.163089318s" podCreationTimestamp="2025-08-13 07:19:47 +0000 UTC" firstStartedPulling="2025-08-13 07:20:12.211628291 +0000 UTC m=+46.740148030" lastFinishedPulling="2025-08-13 07:20:36.018809642 +0000 UTC m=+70.547329378" observedRunningTime="2025-08-13 07:20:37.244183337 +0000 UTC m=+71.772703093" watchObservedRunningTime="2025-08-13 07:20:41.163089318 +0000 UTC m=+75.691609072"
Aug 13 07:20:42.707045 systemd[1]: Started sshd@14-172.31.17.50:22-147.75.109.163:60390.service - OpenSSH per-connection server daemon (147.75.109.163:60390).
Aug 13 07:20:42.946070 sshd[6714]: Accepted publickey for core from 147.75.109.163 port 60390 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:42.949497 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:42.955404 systemd-logind[1957]: New session 15 of user core.
Aug 13 07:20:42.958892 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:20:43.603486 sshd[6714]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:43.611280 systemd[1]: sshd@14-172.31.17.50:22-147.75.109.163:60390.service: Deactivated successfully.
Aug 13 07:20:43.613436 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:20:43.615463 systemd-logind[1957]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:20:43.616714 systemd-logind[1957]: Removed session 15.
Aug 13 07:20:48.647329 systemd[1]: Started sshd@15-172.31.17.50:22-147.75.109.163:59862.service - OpenSSH per-connection server daemon (147.75.109.163:59862).
Aug 13 07:20:48.966262 sshd[6727]: Accepted publickey for core from 147.75.109.163 port 59862 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:48.971173 sshd[6727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:48.981328 systemd-logind[1957]: New session 16 of user core.
Aug 13 07:20:48.987114 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:20:49.816739 sshd[6727]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:49.819987 systemd[1]: sshd@15-172.31.17.50:22-147.75.109.163:59862.service: Deactivated successfully.
Aug 13 07:20:49.822357 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:20:49.824606 systemd-logind[1957]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:20:49.825991 systemd-logind[1957]: Removed session 16.
Aug 13 07:20:49.853135 systemd[1]: Started sshd@16-172.31.17.50:22-147.75.109.163:59868.service - OpenSSH per-connection server daemon (147.75.109.163:59868).
Aug 13 07:20:50.036448 sshd[6744]: Accepted publickey for core from 147.75.109.163 port 59868 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:50.037137 sshd[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:50.042243 systemd-logind[1957]: New session 17 of user core.
Aug 13 07:20:50.050902 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:20:50.751526 sshd[6744]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:50.755985 systemd-logind[1957]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:20:50.756959 systemd[1]: sshd@16-172.31.17.50:22-147.75.109.163:59868.service: Deactivated successfully.
Aug 13 07:20:50.759394 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:20:50.760517 systemd-logind[1957]: Removed session 17.
Aug 13 07:20:50.784906 systemd[1]: Started sshd@17-172.31.17.50:22-147.75.109.163:59870.service - OpenSSH per-connection server daemon (147.75.109.163:59870).
Aug 13 07:20:50.959695 sshd[6757]: Accepted publickey for core from 147.75.109.163 port 59870 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:50.961213 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:50.966297 systemd-logind[1957]: New session 18 of user core.
Aug 13 07:20:50.973874 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:20:53.671296 sshd[6757]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:53.707216 systemd[1]: sshd@17-172.31.17.50:22-147.75.109.163:59870.service: Deactivated successfully.
Aug 13 07:20:53.710886 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:20:53.713657 systemd-logind[1957]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:20:53.726922 systemd[1]: Started sshd@18-172.31.17.50:22-147.75.109.163:59886.service - OpenSSH per-connection server daemon (147.75.109.163:59886).
Aug 13 07:20:53.730907 systemd-logind[1957]: Removed session 18.
Aug 13 07:20:53.975196 sshd[6774]: Accepted publickey for core from 147.75.109.163 port 59886 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:53.978186 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:53.984899 systemd-logind[1957]: New session 19 of user core.
Aug 13 07:20:53.987865 systemd[1]: Started session-19.scope - Session 19 of User core.
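
[Annotation] Sessions 14 through 25 in this log all follow the same open/close choreography, and the logind "New session"/"Removed session" pairs are enough to measure how long each lasted. A rough sketch (my own helper; it assumes this log's prefix format with no hostname field, and a single day, since these short timestamps carry no year):

    import re
    from datetime import datetime

    STAMP = "%b %d %H:%M:%S.%f"
    NEW  = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
    GONE = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def session_lengths(lines):
        opened = {}
        for line in lines:
            if m := NEW.match(line):
                opened[m.group(2)] = datetime.strptime(m.group(1), STAMP)
            elif (m := GONE.match(line)) and m.group(2) in opened:
                ended = datetime.strptime(m.group(1), STAMP)
                yield m.group(2), (ended - opened.pop(m.group(2))).total_seconds()

On this stretch it would report roughly 0.8s for session 16 and 2.8s for session 18.
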
Aug 13 07:20:55.014814 sshd[6774]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:55.026912 systemd[1]: sshd@18-172.31.17.50:22-147.75.109.163:59886.service: Deactivated successfully.
Aug 13 07:20:55.030978 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:20:55.043288 systemd-logind[1957]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:20:55.062213 systemd[1]: Started sshd@19-172.31.17.50:22-147.75.109.163:59890.service - OpenSSH per-connection server daemon (147.75.109.163:59890).
Aug 13 07:20:55.064363 systemd-logind[1957]: Removed session 19.
Aug 13 07:20:55.278974 sshd[6786]: Accepted publickey for core from 147.75.109.163 port 59890 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:20:55.282539 sshd[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:20:55.290560 systemd-logind[1957]: New session 20 of user core.
Aug 13 07:20:55.308949 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:20:55.629339 sshd[6786]: pam_unix(sshd:session): session closed for user core
Aug 13 07:20:55.635850 systemd-logind[1957]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:20:55.636960 systemd[1]: sshd@19-172.31.17.50:22-147.75.109.163:59890.service: Deactivated successfully.
Aug 13 07:20:55.639403 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:20:55.640352 systemd-logind[1957]: Removed session 20.
Aug 13 07:20:57.753747 systemd[1]: run-containerd-runc-k8s.io-1cfb31a5813af885af783974107542d28cb429f94c6197487239416e65a4b82e-runc.K35URC.mount: Deactivated successfully.
Aug 13 07:21:00.675989 systemd[1]: Started sshd@20-172.31.17.50:22-147.75.109.163:60568.service - OpenSSH per-connection server daemon (147.75.109.163:60568).
Aug 13 07:21:00.988600 sshd[6843]: Accepted publickey for core from 147.75.109.163 port 60568 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:21:00.995643 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:01.008517 systemd-logind[1957]: New session 21 of user core.
Aug 13 07:21:01.012921 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:21:02.831006 sshd[6843]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:02.861207 systemd[1]: sshd@20-172.31.17.50:22-147.75.109.163:60568.service: Deactivated successfully.
Aug 13 07:21:02.869568 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:21:02.872261 systemd-logind[1957]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:21:02.874075 systemd-logind[1957]: Removed session 21.
Aug 13 07:21:04.857663 systemd[1]: run-containerd-runc-k8s.io-0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1-runc.9AMhWH.mount: Deactivated successfully.
Aug 13 07:21:07.915382 systemd[1]: Started sshd@21-172.31.17.50:22-147.75.109.163:60574.service - OpenSSH per-connection server daemon (147.75.109.163:60574).
Aug 13 07:21:08.275501 sshd[6899]: Accepted publickey for core from 147.75.109.163 port 60574 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:21:08.279358 sshd[6899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:08.298982 systemd-logind[1957]: New session 22 of user core.
Aug 13 07:21:08.303939 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:21:09.895042 sshd[6899]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:09.910631 systemd[1]: sshd@21-172.31.17.50:22-147.75.109.163:60574.service: Deactivated successfully.
Aug 13 07:21:09.911426 systemd-logind[1957]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:21:09.918569 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:21:09.924264 systemd-logind[1957]: Removed session 22.
Aug 13 07:21:14.937998 systemd[1]: Started sshd@22-172.31.17.50:22-147.75.109.163:55142.service - OpenSSH per-connection server daemon (147.75.109.163:55142).
Aug 13 07:21:15.257189 sshd[6918]: Accepted publickey for core from 147.75.109.163 port 55142 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:21:15.267396 sshd[6918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:15.276223 systemd-logind[1957]: New session 23 of user core.
Aug 13 07:21:15.282899 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:21:16.478002 sshd[6918]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:16.483981 systemd[1]: sshd@22-172.31.17.50:22-147.75.109.163:55142.service: Deactivated successfully.
Aug 13 07:21:16.484298 systemd-logind[1957]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:21:16.490059 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:21:16.493123 systemd-logind[1957]: Removed session 23.
Aug 13 07:21:21.516026 systemd[1]: Started sshd@23-172.31.17.50:22-147.75.109.163:42452.service - OpenSSH per-connection server daemon (147.75.109.163:42452).
Aug 13 07:21:21.746851 sshd[6931]: Accepted publickey for core from 147.75.109.163 port 42452 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:21:21.748391 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:21.753784 systemd-logind[1957]: New session 24 of user core.
Aug 13 07:21:21.762885 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 07:21:22.476016 sshd[6931]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:22.483262 systemd[1]: sshd@23-172.31.17.50:22-147.75.109.163:42452.service: Deactivated successfully.
Aug 13 07:21:22.483470 systemd-logind[1957]: Session 24 logged out. Waiting for processes to exit.
Aug 13 07:21:22.487207 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 07:21:22.498970 systemd-logind[1957]: Removed session 24.
Aug 13 07:21:27.546162 systemd[1]: Started sshd@24-172.31.17.50:22-147.75.109.163:42464.service - OpenSSH per-connection server daemon (147.75.109.163:42464).
Aug 13 07:21:27.835775 sshd[6967]: Accepted publickey for core from 147.75.109.163 port 42464 ssh2: RSA SHA256:EC/ch/rv0K2dityu9tU4pjM1BVuNBVPshjnRCNB2kiI
Aug 13 07:21:27.839900 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:21:27.847226 systemd-logind[1957]: New session 25 of user core.
Aug 13 07:21:27.852929 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 07:21:29.240637 sshd[6967]: pam_unix(sshd:session): session closed for user core
Aug 13 07:21:29.246378 systemd[1]: sshd@24-172.31.17.50:22-147.75.109.163:42464.service: Deactivated successfully.
Aug 13 07:21:29.250621 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 07:21:29.258717 systemd-logind[1957]: Session 25 logged out. Waiting for processes to exit.
Aug 13 07:21:29.262345 systemd-logind[1957]: Removed session 25.
Aug 13 07:21:31.148331 systemd[1]: run-containerd-runc-k8s.io-1016176faf0974610351dd2f1b6015ca2018fa4ed63334e9505cda225218c137-runc.WTfvKS.mount: Deactivated successfully.
Aug 13 07:21:43.385371 systemd[1]: cri-containerd-33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4.scope: Deactivated successfully.
Aug 13 07:21:43.386274 systemd[1]: cri-containerd-33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4.scope: Consumed 3.600s CPU time, 31.5M memory peak, 0B memory swap peak.
Aug 13 07:21:43.694408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4-rootfs.mount: Deactivated successfully.
Aug 13 07:21:43.764391 containerd[1979]: time="2025-08-13T07:21:43.733004728Z" level=info msg="shim disconnected" id=33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4 namespace=k8s.io
Aug 13 07:21:43.764391 containerd[1979]: time="2025-08-13T07:21:43.764389207Z" level=warning msg="cleaning up after shim disconnected" id=33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4 namespace=k8s.io
Aug 13 07:21:43.770895 containerd[1979]: time="2025-08-13T07:21:43.764409787Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:21:44.299059 kubelet[3177]: I0813 07:21:44.294267 3177 scope.go:117] "RemoveContainer" containerID="33574102509fc74f9f174607956ffe74c5dcfa7965e1b3877cd12be0a6ea68c4"
Aug 13 07:21:44.380508 containerd[1979]: time="2025-08-13T07:21:44.380458716Z" level=info msg="CreateContainer within sandbox \"5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 07:21:44.514254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278822840.mount: Deactivated successfully.
Aug 13 07:21:44.531246 containerd[1979]: time="2025-08-13T07:21:44.531184006Z" level=info msg="CreateContainer within sandbox \"5d6575f97fe8f60c5552db978c1d18d8fbe31a827785e704ac2772d355eb2949\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"10efe7251e7b3da901a047fdecbf6d3cacea6e69a610f825e5a622bda8561096\""
Aug 13 07:21:44.531859 containerd[1979]: time="2025-08-13T07:21:44.531829098Z" level=info msg="StartContainer for \"10efe7251e7b3da901a047fdecbf6d3cacea6e69a610f825e5a622bda8561096\""
Aug 13 07:21:44.615914 systemd[1]: Started cri-containerd-10efe7251e7b3da901a047fdecbf6d3cacea6e69a610f825e5a622bda8561096.scope - libcontainer container 10efe7251e7b3da901a047fdecbf6d3cacea6e69a610f825e5a622bda8561096.
Aug 13 07:21:44.754325 containerd[1979]: time="2025-08-13T07:21:44.754275154Z" level=info msg="StartContainer for \"10efe7251e7b3da901a047fdecbf6d3cacea6e69a610f825e5a622bda8561096\" returns successfully"
Aug 13 07:21:44.817369 systemd[1]: cri-containerd-ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a.scope: Deactivated successfully.
Aug 13 07:21:44.817590 systemd[1]: cri-containerd-ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a.scope: Consumed 13.630s CPU time.
Aug 13 07:21:44.846032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a-rootfs.mount: Deactivated successfully.
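
[Annotation] When a container scope stops, systemd logs an accounting summary ("Consumed 3.600s CPU time, 31.5M memory peak, ..." above). A small sketch (my own helper, cpu_by_scope) that tallies the reported CPU time per cri-containerd scope across a captured journal:

    import re

    CONSUMED = re.compile(r"(cri-containerd-[0-9a-f]+\.scope): Consumed ([\d.]+)s CPU time")

    def cpu_by_scope(lines):
        totals = {}
        for line in lines:
            if m := CONSUMED.search(line):
                totals[m.group(1)] = totals.get(m.group(1), 0.0) + float(m.group(2))
        return totals

For this stretch it would report 3.600s for the kube-controller-manager scope (33574102...) and 13.630s for the tigera-operator scope (ad796729...).
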
Aug 13 07:21:44.847317 containerd[1979]: time="2025-08-13T07:21:44.847066576Z" level=info msg="shim disconnected" id=ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a namespace=k8s.io
Aug 13 07:21:44.847317 containerd[1979]: time="2025-08-13T07:21:44.847133335Z" level=warning msg="cleaning up after shim disconnected" id=ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a namespace=k8s.io
Aug 13 07:21:44.847317 containerd[1979]: time="2025-08-13T07:21:44.847146686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:21:45.247406 kubelet[3177]: I0813 07:21:45.247377 3177 scope.go:117] "RemoveContainer" containerID="ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a"
Aug 13 07:21:45.269638 containerd[1979]: time="2025-08-13T07:21:45.269592963Z" level=info msg="CreateContainer within sandbox \"2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 13 07:21:45.294392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600720150.mount: Deactivated successfully.
Aug 13 07:21:45.301689 containerd[1979]: time="2025-08-13T07:21:45.301508157Z" level=info msg="CreateContainer within sandbox \"2ff91518f863deca9d437709a9d87610ffed83a79e5beb9a5c12cc3e183d28a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00\""
Aug 13 07:21:45.302415 containerd[1979]: time="2025-08-13T07:21:45.302369352Z" level=info msg="StartContainer for \"e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00\""
Aug 13 07:21:45.359216 systemd[1]: Started cri-containerd-e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00.scope - libcontainer container e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00.
Aug 13 07:21:45.421628 containerd[1979]: time="2025-08-13T07:21:45.421577381Z" level=info msg="StartContainer for \"e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00\" returns successfully"
Aug 13 07:21:48.596537 systemd[1]: cri-containerd-56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779.scope: Deactivated successfully.
Aug 13 07:21:48.597269 systemd[1]: cri-containerd-56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779.scope: Consumed 1.745s CPU time, 19.4M memory peak, 0B memory swap peak.
Aug 13 07:21:48.624605 containerd[1979]: time="2025-08-13T07:21:48.624397720Z" level=info msg="shim disconnected" id=56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779 namespace=k8s.io
Aug 13 07:21:48.624605 containerd[1979]: time="2025-08-13T07:21:48.624479937Z" level=warning msg="cleaning up after shim disconnected" id=56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779 namespace=k8s.io
Aug 13 07:21:48.624605 containerd[1979]: time="2025-08-13T07:21:48.624490330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:21:48.625937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779-rootfs.mount: Deactivated successfully.
Aug 13 07:21:49.267036 kubelet[3177]: I0813 07:21:49.267004 3177 scope.go:117] "RemoveContainer" containerID="56e379d0978b3b6b5e87097e1be8ef3573d30774b343846b0a47c5e8c3510779"
Aug 13 07:21:49.269195 containerd[1979]: time="2025-08-13T07:21:49.269162374Z" level=info msg="CreateContainer within sandbox \"44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 07:21:49.295710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500571550.mount: Deactivated successfully.
Aug 13 07:21:49.298456 containerd[1979]: time="2025-08-13T07:21:49.298404196Z" level=info msg="CreateContainer within sandbox \"44cf6041e4b072061498843fd83fedb53115c620988b28eb7dad3a674b722497\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b5e83d02e50da68b4e0cfb73086de886e14a6fd2f36785c3058f6ab73164547d\""
Aug 13 07:21:49.304828 containerd[1979]: time="2025-08-13T07:21:49.298945582Z" level=info msg="StartContainer for \"b5e83d02e50da68b4e0cfb73086de886e14a6fd2f36785c3058f6ab73164547d\""
Aug 13 07:21:49.338061 systemd[1]: Started cri-containerd-b5e83d02e50da68b4e0cfb73086de886e14a6fd2f36785c3058f6ab73164547d.scope - libcontainer container b5e83d02e50da68b4e0cfb73086de886e14a6fd2f36785c3058f6ab73164547d.
Aug 13 07:21:49.392235 containerd[1979]: time="2025-08-13T07:21:49.392128778Z" level=info msg="StartContainer for \"b5e83d02e50da68b4e0cfb73086de886e14a6fd2f36785c3058f6ab73164547d\" returns successfully"
Aug 13 07:21:49.658544 kubelet[3177]: E0813 07:21:49.656945 3177 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 13 07:21:56.287838 systemd[1]: run-containerd-runc-k8s.io-0f9a8f04aaf43cd0ed2476048a54adeb454e2d299ee471c8eca50882a37927e1-runc.unC7T0.mount: Deactivated successfully.
Aug 13 07:21:57.122837 systemd[1]: cri-containerd-e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00.scope: Deactivated successfully.
Aug 13 07:21:57.152520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00-rootfs.mount: Deactivated successfully.
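
[Annotation] Each restart in this sequence bumps the Attempt field in containerd's CreateContainer metadata; by this point kube-controller-manager, tigera-operator, and kube-scheduler are all at Attempt:1. A throwaway sketch (my own helper, restart_counts) that pulls the highest attempt seen per container name out of lines like these:

    import re

    ATTEMPT = re.compile(r"&ContainerMetadata\{Name:(?P<name>[\w-]+),Attempt:(?P<n>\d+),\}")

    def restart_counts(lines):
        counts = {}
        for line in lines:
            for m in ATTEMPT.finditer(line):
                counts[m.group("name")] = max(counts.get(m.group("name"), 0), int(m.group("n")))
        return counts
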
Aug 13 07:21:57.167794 containerd[1979]: time="2025-08-13T07:21:57.167627920Z" level=info msg="shim disconnected" id=e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00 namespace=k8s.io
Aug 13 07:21:57.167794 containerd[1979]: time="2025-08-13T07:21:57.167776238Z" level=warning msg="cleaning up after shim disconnected" id=e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00 namespace=k8s.io
Aug 13 07:21:57.167794 containerd[1979]: time="2025-08-13T07:21:57.167789783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 07:21:57.305370 kubelet[3177]: I0813 07:21:57.305303 3177 scope.go:117] "RemoveContainer" containerID="ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a"
Aug 13 07:21:57.306127 kubelet[3177]: I0813 07:21:57.306076 3177 scope.go:117] "RemoveContainer" containerID="e3edd0c59561b030e937ef4191416d158966581fe8a0115d45cd203859a4cb00"
Aug 13 07:21:57.326338 kubelet[3177]: E0813 07:21:57.325451 3177 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-b4vml_tigera-operator(554b2ead-6c9a-4ab3-a990-38755a393230)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-b4vml" podUID="554b2ead-6c9a-4ab3-a990-38755a393230"
Aug 13 07:21:57.411505 containerd[1979]: time="2025-08-13T07:21:57.411450404Z" level=info msg="RemoveContainer for \"ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a\""
Aug 13 07:21:57.453428 containerd[1979]: time="2025-08-13T07:21:57.453160601Z" level=info msg="RemoveContainer for \"ad796729df047ac15aaece664ba4cba30bec24296fca407c63396a503def642a\" returns successfully"
Aug 13 07:21:59.673926 kubelet[3177]: E0813 07:21:59.673696 3177 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-50?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
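
[Annotation] The CrashLoopBackOff error above shows kubelet throttling restarts of tigera-operator: "back-off 10s" is the first step of kubelet's crash restart backoff, which by default starts at 10s, doubles per subsequent crash, and is capped at 5 minutes (default behavior assumed, not read from this node's configuration). A one-liner sketch of that schedule:

    # kubelet's default crash restart backoff: 10s initial, doubling, 300s cap.
    def backoff_schedule(restarts, initial=10, cap=300):
        return [min(initial * 2 ** n, cap) for n in range(restarts)]

    print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]
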