Aug 13 07:15:16.152107 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:15:16.152147 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:16.152163 kernel: BIOS-provided physical RAM map:
Aug 13 07:15:16.152175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:15:16.152185 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 07:15:16.152196 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 07:15:16.152210 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Aug 13 07:15:16.152226 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Aug 13 07:15:16.152237 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 13 07:15:16.152272 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 07:15:16.152282 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 07:15:16.152292 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 07:15:16.152303 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 07:15:16.152313 kernel: NX (Execute Disable) protection: active
Aug 13 07:15:16.152343 kernel: APIC: Static calls initialized
Aug 13 07:15:16.152367 kernel: efi: EFI v2.7 by Microsoft
Aug 13 07:15:16.152390 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98
Aug 13 07:15:16.152400 kernel: SMBIOS 3.1.0 present.
Aug 13 07:15:16.152412 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Aug 13 07:15:16.152424 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 07:15:16.152436 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 07:15:16.152449 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Aug 13 07:15:16.152462 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 07:15:16.152472 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 07:15:16.152486 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 07:15:16.152498 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:15:16.152512 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:15:16.152525 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 07:15:16.152537 kernel: tsc: Detected 2593.906 MHz processor
Aug 13 07:15:16.152548 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:15:16.152560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:15:16.152572 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 07:15:16.152583 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:15:16.152602 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:15:16.152618 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 07:15:16.152630 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 07:15:16.152640 kernel: Using GB pages for direct mapping
Aug 13 07:15:16.152651 kernel: Secure boot disabled
Aug 13 07:15:16.152663 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:15:16.152676 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 07:15:16.152693 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152708 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152721 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Aug 13 07:15:16.152733 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 07:15:16.152747 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152760 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152772 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152789 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152804 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152818 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152832 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152846 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 07:15:16.152860 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Aug 13 07:15:16.152874 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 07:15:16.152888 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 07:15:16.152905 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 07:15:16.152919 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 07:15:16.152933 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 07:15:16.152947 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 07:15:16.152961 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 07:15:16.152975 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Aug 13 07:15:16.152989 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:15:16.153004 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:15:16.153018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 07:15:16.153035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 07:15:16.153049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 07:15:16.153064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 07:15:16.153078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 07:15:16.153092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 07:15:16.153106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 07:15:16.153121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 07:15:16.153134 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 07:15:16.153146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 07:15:16.153162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 07:15:16.153176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 07:15:16.153190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 07:15:16.153204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 07:15:16.153218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 07:15:16.153231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 07:15:16.159676 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 07:15:16.159698 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 07:15:16.159712 kernel: Zone ranges:
Aug 13 07:15:16.159740 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:15:16.159753 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:15:16.159767 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:15:16.159781 kernel: Movable zone start for each node
Aug 13 07:15:16.159795 kernel: Early memory node ranges
Aug 13 07:15:16.159810 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:15:16.159824 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 07:15:16.159838 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 07:15:16.159852 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:15:16.159870 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 07:15:16.159884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:15:16.159899 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:15:16.159913 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Aug 13 07:15:16.159927 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 07:15:16.159941 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 07:15:16.159955 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:15:16.159969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:15:16.159983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:15:16.160000 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 07:15:16.160015 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:15:16.160029 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 07:15:16.160043 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 07:15:16.160057 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:15:16.160072 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:15:16.160086 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:15:16.160100 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:15:16.160114 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:15:16.160131 kernel: Hyper-V: PV spinlocks enabled
Aug 13 07:15:16.160145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:15:16.160162 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:16.160176 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:15:16.160195 kernel: random: crng init done
Aug 13 07:15:16.160209 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:15:16.160223 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:15:16.160256 kernel: Fallback order for Node 0: 0
Aug 13 07:15:16.160283 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Aug 13 07:15:16.160308 kernel: Policy zone: Normal
Aug 13 07:15:16.160326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:15:16.160341 kernel: software IO TLB: area num 2.
Aug 13 07:15:16.160357 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 310124K reserved, 0K cma-reserved)
Aug 13 07:15:16.160372 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:15:16.160387 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:15:16.160402 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:15:16.160417 kernel: Dynamic Preempt: voluntary
Aug 13 07:15:16.160433 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:15:16.160449 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:15:16.160468 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:15:16.160483 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:15:16.160498 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:15:16.160514 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:15:16.160529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:15:16.160547 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:15:16.160562 kernel: Using NULL legacy PIC
Aug 13 07:15:16.160577 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 07:15:16.160592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:15:16.160607 kernel: Console: colour dummy device 80x25
Aug 13 07:15:16.160622 kernel: printk: console [tty1] enabled
Aug 13 07:15:16.160637 kernel: printk: console [ttyS0] enabled
Aug 13 07:15:16.160652 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 07:15:16.160667 kernel: ACPI: Core revision 20230628
Aug 13 07:15:16.160682 kernel: Failed to register legacy timer interrupt
Aug 13 07:15:16.160700 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:15:16.160715 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 07:15:16.160731 kernel: Hyper-V: Using IPI hypercalls
Aug 13 07:15:16.160745 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Aug 13 07:15:16.160760 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Aug 13 07:15:16.160776 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Aug 13 07:15:16.160791 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Aug 13 07:15:16.160806 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Aug 13 07:15:16.160821 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Aug 13 07:15:16.160840 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Aug 13 07:15:16.160855 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 07:15:16.160870 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 07:15:16.160885 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:15:16.160900 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:15:16.160915 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:15:16.160929 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 07:15:16.160944 kernel: RETBleed: Vulnerable
Aug 13 07:15:16.160959 kernel: Speculative Store Bypass: Vulnerable
Aug 13 07:15:16.160977 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:15:16.160992 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:15:16.161007 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:15:16.161022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:15:16.161037 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:15:16.161052 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:15:16.161067 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 07:15:16.161082 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 07:15:16.161097 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 07:15:16.161112 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:15:16.161126 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 07:15:16.161143 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 07:15:16.161158 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 07:15:16.161173 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 07:15:16.161188 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:15:16.161203 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:15:16.161218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:15:16.161233 kernel: landlock: Up and running.
Aug 13 07:15:16.161281 kernel: SELinux: Initializing.
Aug 13 07:15:16.161295 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.161308 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.161329 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 07:15:16.161345 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161365 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161380 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161396 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 07:15:16.161410 kernel: signal: max sigframe size: 3632
Aug 13 07:15:16.161426 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:15:16.161442 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:15:16.161457 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:15:16.161472 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:15:16.161487 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:15:16.161506 kernel: .... node #0, CPUs: #1
Aug 13 07:15:16.161522 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 07:15:16.161538 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:15:16.161553 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:15:16.161569 kernel: smpboot: Max logical packages: 1
Aug 13 07:15:16.161584 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Aug 13 07:15:16.161599 kernel: devtmpfs: initialized
Aug 13 07:15:16.161614 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:15:16.161632 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 07:15:16.161648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:15:16.161664 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:15:16.161678 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:15:16.161694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:15:16.161709 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:15:16.161724 kernel: audit: type=2000 audit(1755069314.031:1): state=initialized audit_enabled=0 res=1
Aug 13 07:15:16.161738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:15:16.161753 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:15:16.161771 kernel: cpuidle: using governor menu
Aug 13 07:15:16.161787 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:15:16.161802 kernel: dca service started, version 1.12.1
Aug 13 07:15:16.161817 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Aug 13 07:15:16.161832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:15:16.161847 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:15:16.161862 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:15:16.161878 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:15:16.161896 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:15:16.161911 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:15:16.161926 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:15:16.161941 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:15:16.161956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:15:16.161971 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:15:16.161986 kernel: ACPI: Interpreter enabled
Aug 13 07:15:16.162001 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:15:16.162016 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:15:16.162032 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:15:16.162050 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 07:15:16.162065 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Aug 13 07:15:16.162080 kernel: iommu: Default domain type: Translated
Aug 13 07:15:16.162095 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:15:16.162110 kernel: efivars: Registered efivars operations
Aug 13 07:15:16.162125 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:15:16.162140 kernel: PCI: System does not support PCI
Aug 13 07:15:16.162155 kernel: vgaarb: loaded
Aug 13 07:15:16.162170 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Aug 13 07:15:16.162188 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:15:16.162203 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:15:16.162218 kernel: pnp: PnP ACPI init
Aug 13 07:15:16.162233 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 07:15:16.162285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:15:16.162298 kernel: NET: Registered PF_INET protocol family
Aug 13 07:15:16.162311 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:15:16.162324 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 07:15:16.162339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:15:16.162358 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 07:15:16.162373 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 07:15:16.162387 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 07:15:16.162402 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.162417 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.162429 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:15:16.162443 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:15:16.162455 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:15:16.162468 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:15:16.162483 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB)
Aug 13 07:15:16.162491 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:15:16.162500 kernel: Initialise system trusted keyrings
Aug 13 07:15:16.162508 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 07:15:16.162516 kernel: Key type asymmetric registered
Aug 13 07:15:16.162524 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:15:16.162532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:15:16.162541 kernel: io scheduler mq-deadline registered
Aug 13 07:15:16.162549 kernel: io scheduler kyber registered
Aug 13 07:15:16.162560 kernel: io scheduler bfq registered
Aug 13 07:15:16.162568 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:15:16.162576 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:15:16.162584 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:15:16.162592 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 07:15:16.162601 kernel: i8042: PNP: No PS/2 controller found.
Aug 13 07:15:16.162788 kernel: rtc_cmos 00:02: registered as rtc0
Aug 13 07:15:16.162920 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T07:15:15 UTC (1755069315)
Aug 13 07:15:16.163042 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Aug 13 07:15:16.163061 kernel: intel_pstate: CPU model not supported
Aug 13 07:15:16.163077 kernel: efifb: probing for efifb
Aug 13 07:15:16.163092 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 07:15:16.163107 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 07:15:16.163122 kernel: efifb: scrolling: redraw
Aug 13 07:15:16.163137 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 07:15:16.163152 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:15:16.163171 kernel: fb0: EFI VGA frame buffer device
Aug 13 07:15:16.163186 kernel: pstore: Using crash dump compression: deflate
Aug 13 07:15:16.163201 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 07:15:16.163215 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:15:16.163230 kernel: Segment Routing with IPv6
Aug 13 07:15:16.169438 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:15:16.169465 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:15:16.169483 kernel: Key type dns_resolver registered
Aug 13 07:15:16.169500 kernel: IPI shorthand broadcast: enabled
Aug 13 07:15:16.169524 kernel: sched_clock: Marking stable (988003600, 57105500)->(1297660000, -252550900)
Aug 13 07:15:16.169539 kernel: registered taskstats version 1
Aug 13 07:15:16.169554 kernel: Loading compiled-in X.509 certificates
Aug 13 07:15:16.169570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:15:16.169585 kernel: Key type .fscrypt registered
Aug 13 07:15:16.169600 kernel: Key type fscrypt-provisioning registered
Aug 13 07:15:16.169616 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:15:16.169631 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:15:16.169647 kernel: ima: No architecture policies found
Aug 13 07:15:16.169666 kernel: clk: Disabling unused clocks
Aug 13 07:15:16.169681 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:15:16.169697 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:15:16.169713 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:15:16.169732 kernel: Run /init as init process
Aug 13 07:15:16.169747 kernel: with arguments:
Aug 13 07:15:16.169762 kernel: /init
Aug 13 07:15:16.169778 kernel: with environment:
Aug 13 07:15:16.169793 kernel: HOME=/
Aug 13 07:15:16.169810 kernel: TERM=linux
Aug 13 07:15:16.169825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:15:16.169845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:15:16.169864 systemd[1]: Detected virtualization microsoft.
Aug 13 07:15:16.169880 systemd[1]: Detected architecture x86-64.
Aug 13 07:15:16.169895 systemd[1]: Running in initrd.
Aug 13 07:15:16.169911 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:15:16.169926 systemd[1]: Hostname set to .
Aug 13 07:15:16.169946 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:15:16.169962 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:15:16.169979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:15:16.169995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:15:16.170013 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:15:16.170029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:15:16.170045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:15:16.170062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:15:16.170083 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:15:16.170100 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:15:16.170116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:15:16.170132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:16.170148 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:15:16.170165 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:15:16.170184 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:15:16.170200 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:15:16.170216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:15:16.170232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:15:16.170263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:15:16.170280 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:15:16.170297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:15:16.170313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:15:16.170329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:15:16.170349 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:15:16.170365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:15:16.170382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:15:16.170398 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:15:16.170414 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:15:16.170431 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:15:16.170446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:15:16.170464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:16.170516 systemd-journald[176]: Collecting audit messages is disabled.
Aug 13 07:15:16.170557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:15:16.170574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:15:16.170590 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:15:16.170611 systemd-journald[176]: Journal started
Aug 13 07:15:16.170649 systemd-journald[176]: Runtime Journal (/run/log/journal/640a2a69cc0842b2a70bbe2c20dd5a9b) is 8.0M, max 158.8M, 150.8M free.
Aug 13 07:15:16.152637 systemd-modules-load[177]: Inserted module 'overlay'
Aug 13 07:15:16.183301 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:15:16.203713 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:15:16.200226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:15:16.209450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:15:16.216257 kernel: Bridge firewalling registered
Aug 13 07:15:16.216430 systemd-modules-load[177]: Inserted module 'br_netfilter'
Aug 13 07:15:16.218910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:15:16.228463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:16.231822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:15:16.248486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:15:16.252432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:15:16.262413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:15:16.262852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:16.279288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:15:16.288427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:16.289442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:16.292504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:15:16.304124 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:15:16.323973 dracut-cmdline[212]: dracut-dracut-053 Aug 13 07:15:16.328040 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:15:16.341746 systemd-resolved[205]: Positive Trust Anchors: Aug 13 07:15:16.341760 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:15:16.341807 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:15:16.345370 systemd-resolved[205]: Defaulting to hostname 'linux'. Aug 13 07:15:16.346686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:15:16.368183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:15:16.418270 kernel: SCSI subsystem initialized Aug 13 07:15:16.428269 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:15:16.439270 kernel: iscsi: registered transport (tcp) Aug 13 07:15:16.460710 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:15:16.460779 kernel: QLogic iSCSI HBA Driver Aug 13 07:15:16.497702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:15:16.506401 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:15:16.532603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:15:16.532700 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:15:16.537268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:15:16.576267 kernel: raid6: avx512x4 gen() 18304 MB/s Aug 13 07:15:16.595257 kernel: raid6: avx512x2 gen() 18193 MB/s Aug 13 07:15:16.614256 kernel: raid6: avx512x1 gen() 18310 MB/s Aug 13 07:15:16.633253 kernel: raid6: avx2x4 gen() 18134 MB/s Aug 13 07:15:16.653256 kernel: raid6: avx2x2 gen() 17953 MB/s Aug 13 07:15:16.672953 kernel: raid6: avx2x1 gen() 13756 MB/s Aug 13 07:15:16.672984 kernel: raid6: using algorithm avx512x1 gen() 18310 MB/s Aug 13 07:15:16.693892 kernel: raid6: .... xor() 26067 MB/s, rmw enabled Aug 13 07:15:16.693956 kernel: raid6: using avx512x2 recovery algorithm Aug 13 07:15:16.716267 kernel: xor: automatically using best checksumming function avx Aug 13 07:15:16.864271 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:15:16.873697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:15:16.881434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:15:16.906284 systemd-udevd[395]: Using default interface naming scheme 'v255'. Aug 13 07:15:16.910943 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:15:16.926774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:15:16.939067 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 13 07:15:16.968493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:15:16.975545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:15:17.017599 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:15:17.029493 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:15:17.064767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:15:17.071558 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:15:17.075191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:15:17.081180 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:15:17.093656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:15:17.120285 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:15:17.128932 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:15:17.136289 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 07:15:17.147757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:15:17.147924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:15:17.158615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:15:17.167521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:15:17.167790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.175238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.187048 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 07:15:17.192260 kernel: pps_core: LinuxPPS API ver. 
1 registered Aug 13 07:15:17.192311 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 07:15:17.192326 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 07:15:17.197687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.213290 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 07:15:17.214476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:15:17.226348 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:15:17.226393 kernel: AES CTR mode by8 optimization enabled Aug 13 07:15:17.214629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.233022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.241331 kernel: PTP clock support registered Aug 13 07:15:17.284563 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:15:17.284631 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 07:15:17.285728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.301953 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 07:15:17.301977 kernel: hv_vmbus: registering driver hv_utils Aug 13 07:15:17.301991 kernel: scsi host1: storvsc_host_t Aug 13 07:15:17.302170 kernel: scsi host0: storvsc_host_t Aug 13 07:15:17.302304 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 07:15:17.302330 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 07:15:17.310900 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 07:15:17.310955 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 07:15:17.310973 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 07:15:17.310240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Aug 13 07:15:17.512307 systemd-resolved[205]: Clock change detected. Flushing caches. Aug 13 07:15:17.536656 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 07:15:17.543607 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 07:15:17.550383 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 07:15:17.555179 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 07:15:17.555536 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:15:17.562741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:15:17.572772 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 07:15:17.585665 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 07:15:17.585928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 07:15:17.587707 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:15:17.590689 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 07:15:17.590896 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 07:15:17.601223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:17.601282 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:15:17.636277 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: VF slot 1 added Aug 13 07:15:17.646218 kernel: hv_vmbus: registering driver hv_pci Aug 13 07:15:17.646314 kernel: hv_pci 47e48a7e-3839-4f94-9f36-12fbcb3bcee7: PCI VMBus probing: Using version 0x10004 Aug 13 07:15:17.649681 kernel: hv_pci 47e48a7e-3839-4f94-9f36-12fbcb3bcee7: PCI host bridge to bus 3839:00 Aug 13 07:15:17.653426 kernel: pci_bus 3839:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 07:15:17.654278 kernel: pci_bus 3839:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 07:15:17.661606 kernel: pci 3839:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 13 
07:15:17.667290 kernel: pci 3839:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:15:17.671367 kernel: pci 3839:00:02.0: enabling Extended Tags Aug 13 07:15:17.685383 kernel: pci 3839:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3839:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 07:15:17.693728 kernel: pci_bus 3839:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 07:15:17.694143 kernel: pci 3839:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:15:17.884624 kernel: mlx5_core 3839:00:02.0: enabling device (0000 -> 0002) Aug 13 07:15:17.889282 kernel: mlx5_core 3839:00:02.0: firmware version: 14.30.5000 Aug 13 07:15:18.108472 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: VF registering: eth1 Aug 13 07:15:18.108878 kernel: mlx5_core 3839:00:02.0 eth1: joined to eth0 Aug 13 07:15:18.114842 kernel: mlx5_core 3839:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 07:15:18.125285 kernel: mlx5_core 3839:00:02.0 enP14393s1: renamed from eth1 Aug 13 07:15:18.176221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 07:15:18.230282 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446) Aug 13 07:15:18.245717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:15:18.279828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 07:15:18.392402 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (442) Aug 13 07:15:18.406735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 07:15:18.412953 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
Aug 13 07:15:18.425435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:15:18.442287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:18.453289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:18.461280 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:19.461275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:19.463196 disk-uuid[602]: The operation has completed successfully. Aug 13 07:15:19.558711 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:15:19.558843 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:15:19.592599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:15:19.602315 sh[715]: Success Aug 13 07:15:19.625290 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:15:20.376701 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:15:20.388380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:15:20.394011 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:15:20.412170 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:15:20.412273 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:20.415675 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:15:20.418301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:15:20.420647 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:15:21.352894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:15:21.357871 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Aug 13 07:15:21.369565 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:15:21.375444 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:15:21.402563 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:21.402624 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:21.402645 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:15:21.456637 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:15:21.469489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:15:21.491375 systemd-networkd[889]: lo: Link UP Aug 13 07:15:21.491385 systemd-networkd[889]: lo: Gained carrier Aug 13 07:15:21.493751 systemd-networkd[889]: Enumeration completed Aug 13 07:15:21.493994 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:15:21.496579 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:15:21.496582 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:15:21.498025 systemd[1]: Reached target network.target - Network. Aug 13 07:15:21.555280 kernel: mlx5_core 3839:00:02.0 enP14393s1: Link up Aug 13 07:15:21.560284 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:15:21.568493 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:15:21.573803 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:21.578193 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Aug 13 07:15:21.587270 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: Data path switched to VF: enP14393s1 Aug 13 07:15:21.587980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:15:21.588179 systemd-networkd[889]: enP14393s1: Link UP Aug 13 07:15:21.588319 systemd-networkd[889]: eth0: Link UP Aug 13 07:15:21.588471 systemd-networkd[889]: eth0: Gained carrier Aug 13 07:15:21.588483 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:15:21.596053 systemd-networkd[889]: enP14393s1: Gained carrier Aug 13 07:15:21.625322 systemd-networkd[889]: eth0: DHCPv4 address 10.200.4.46/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:15:23.180460 systemd-networkd[889]: eth0: Gained IPv6LL Aug 13 07:15:23.695602 ignition[900]: Ignition 2.19.0 Aug 13 07:15:23.695613 ignition[900]: Stage: fetch-offline Aug 13 07:15:23.695657 ignition[900]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.695668 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.695789 ignition[900]: parsed url from cmdline: "" Aug 13 07:15:23.695795 ignition[900]: no config URL provided Aug 13 07:15:23.695803 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:15:23.695815 ignition[900]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:15:23.695822 ignition[900]: failed to fetch config: resource requires networking Aug 13 07:15:23.719136 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:15:23.698209 ignition[900]: Ignition finished successfully Aug 13 07:15:23.736429 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:15:23.759338 ignition[909]: Ignition 2.19.0 Aug 13 07:15:23.759351 ignition[909]: Stage: fetch Aug 13 07:15:23.759576 ignition[909]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.759589 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.759805 ignition[909]: parsed url from cmdline: "" Aug 13 07:15:23.759809 ignition[909]: no config URL provided Aug 13 07:15:23.759817 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:15:23.760009 ignition[909]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:15:23.760427 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 07:15:23.887329 ignition[909]: GET result: OK Aug 13 07:15:23.887434 ignition[909]: config has been read from IMDS userdata Aug 13 07:15:23.887467 ignition[909]: parsing config with SHA512: 5ee818102767a928a411da12a993dff72c56e3ee54a35052a4422d12d00cc73dd664f73bd9f1ad028ca7ddfa66b548bbee1d288dbf618b279764cd13282874aa Aug 13 07:15:23.898360 unknown[909]: fetched base config from "system" Aug 13 07:15:23.898811 ignition[909]: fetch: fetch complete Aug 13 07:15:23.898373 unknown[909]: fetched base config from "system" Aug 13 07:15:23.898818 ignition[909]: fetch: fetch passed Aug 13 07:15:23.898379 unknown[909]: fetched user config from "azure" Aug 13 07:15:23.898857 ignition[909]: Ignition finished successfully Aug 13 07:15:23.902225 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:15:23.922619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:15:23.938317 ignition[915]: Ignition 2.19.0 Aug 13 07:15:23.938328 ignition[915]: Stage: kargs Aug 13 07:15:23.940554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Aug 13 07:15:23.938560 ignition[915]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.938574 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.939471 ignition[915]: kargs: kargs passed Aug 13 07:15:23.939515 ignition[915]: Ignition finished successfully Aug 13 07:15:23.956608 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:15:23.973440 ignition[921]: Ignition 2.19.0 Aug 13 07:15:23.973451 ignition[921]: Stage: disks Aug 13 07:15:23.973677 ignition[921]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.975626 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:15:23.973692 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.979288 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:15:23.974602 ignition[921]: disks: disks passed Aug 13 07:15:23.988418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:15:23.974650 ignition[921]: Ignition finished successfully Aug 13 07:15:23.995450 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:15:24.004009 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:15:24.006568 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:15:24.018451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:15:24.058605 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 07:15:24.067039 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:15:24.084836 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:15:24.176272 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:15:24.176831 systemd[1]: Mounted sysroot.mount - /sysroot. 
Aug 13 07:15:24.181136 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:15:24.249406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:15:24.267272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Aug 13 07:15:24.271275 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:24.271319 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:24.275723 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:15:24.282274 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:15:24.287364 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:15:24.292348 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:15:24.298052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:15:24.298176 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:15:24.311151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:15:24.313537 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:15:24.327417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:15:25.629907 coreos-metadata[957]: Aug 13 07:15:25.629 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 07:15:25.637594 coreos-metadata[957]: Aug 13 07:15:25.637 INFO Fetch successful Aug 13 07:15:25.637594 coreos-metadata[957]: Aug 13 07:15:25.637 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 07:15:25.648778 coreos-metadata[957]: Aug 13 07:15:25.648 INFO Fetch successful Aug 13 07:15:25.653800 coreos-metadata[957]: Aug 13 07:15:25.653 INFO wrote hostname ci-4081.3.5-a-7346cb15f0 to /sysroot/etc/hostname Aug 13 07:15:25.659498 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:15:25.957684 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:15:26.056945 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:15:26.107400 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:15:26.198022 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:15:27.758525 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:15:27.772385 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:15:27.775926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:15:27.789437 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:15:27.794802 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:27.824371 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 07:15:27.832802 ignition[1059]: INFO : Ignition 2.19.0 Aug 13 07:15:27.832802 ignition[1059]: INFO : Stage: mount Aug 13 07:15:27.836564 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:27.836564 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:27.836564 ignition[1059]: INFO : mount: mount passed Aug 13 07:15:27.836564 ignition[1059]: INFO : Ignition finished successfully Aug 13 07:15:27.835578 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:15:27.856369 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:15:27.865179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:15:27.934291 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070) Aug 13 07:15:27.942860 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:27.942933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:27.945212 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:15:27.951280 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:15:27.953325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:15:27.983058 ignition[1087]: INFO : Ignition 2.19.0 Aug 13 07:15:27.985455 ignition[1087]: INFO : Stage: files Aug 13 07:15:27.985455 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:27.985455 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:27.985455 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:15:27.995608 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:15:27.995608 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:15:28.244247 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:15:28.255655 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:15:28.255655 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:15:28.249588 unknown[1087]: wrote ssh authorized keys file for user: core Aug 13 07:15:28.322970 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:15:28.330498 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 07:15:28.469299 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:15:28.693726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 07:15:28.693726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 07:15:29.188460 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 07:15:30.272080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 07:15:30.272080 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:15:30.300191 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:15:30.300191 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:15:30.300191 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:15:30.300191 ignition[1087]: INFO : files: files passed Aug 13 07:15:30.300191 ignition[1087]: INFO : Ignition finished successfully Aug 13 07:15:30.282618 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:15:30.306311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:15:30.338418 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Aug 13 07:15:30.348244 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:15:30.352047 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:15:30.369038 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:15:30.369038 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:15:30.389134 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:15:30.381988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:15:30.389282 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:15:30.411559 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:15:30.437490 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:15:30.437609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:15:30.443970 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:15:30.451434 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:15:30.458122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:15:30.481506 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:15:30.503136 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:15:30.517561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:15:30.534802 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:15:30.534922 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 13 07:15:30.545086 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:15:30.548691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:15:30.554402 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:15:30.559564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:15:30.559651 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:15:30.565071 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:15:30.570022 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:15:30.570108 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:15:30.570980 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:15:30.571404 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:15:30.571813 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:15:30.572280 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:15:30.572692 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:15:30.573083 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:15:30.573953 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:15:30.574366 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:15:30.574430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:15:30.575226 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:30.575631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:15:30.576026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:15:30.601122 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:15:30.608606 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:15:30.608699 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:15:30.613795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:15:30.613846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:15:30.619152 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:15:30.619213 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:15:30.624379 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:15:30.626969 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:15:30.689404 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:15:30.696357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:15:30.704286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:15:30.704359 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:15:30.709801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:15:30.709864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:15:30.726713 ignition[1140]: INFO : Ignition 2.19.0
Aug 13 07:15:30.726713 ignition[1140]: INFO : Stage: umount
Aug 13 07:15:30.726713 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:15:30.726713 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:15:30.741121 ignition[1140]: INFO : umount: umount passed
Aug 13 07:15:30.741121 ignition[1140]: INFO : Ignition finished successfully
Aug 13 07:15:30.728885 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:15:30.729054 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:15:30.733244 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:15:30.733307 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:15:30.741171 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:15:30.741228 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:15:30.745639 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:15:30.745686 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:15:30.750152 systemd[1]: Stopped target network.target - Network.
Aug 13 07:15:30.756766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:15:30.756838 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:15:30.759656 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:15:30.764073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:15:30.770539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:15:30.774103 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:15:30.804889 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:15:30.807323 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:15:30.807379 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:15:30.811588 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:15:30.811638 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:15:30.821889 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:15:30.821964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:15:30.829173 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:15:30.829245 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:15:30.834334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:15:30.838877 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:30.844849 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:15:30.845432 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:15:30.845518 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:15:30.848758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:15:30.848858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:15:30.863317 systemd-networkd[889]: eth0: DHCPv6 lease lost
Aug 13 07:15:30.867781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:15:30.868123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:15:30.874031 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:15:30.874203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:15:30.884010 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:15:30.884072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:15:30.896365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:15:30.900681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:15:30.900751 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:15:30.908799 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:15:30.908859 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:15:30.915960 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:15:30.916018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:15:30.920702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:15:30.920757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:30.931029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:15:30.942623 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:15:30.942792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:15:30.948320 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:15:30.948397 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:15:30.953436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:15:30.953483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:15:30.965345 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:15:30.965412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:15:30.972620 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:15:30.972690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:15:30.992277 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: Data path switched from VF: enP14393s1
Aug 13 07:15:30.993426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:15:30.993516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:15:31.006414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:15:31.009090 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:15:31.009159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:31.014660 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:15:31.014710 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:15:31.022711 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:15:31.022768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:15:31.040690 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:31.040761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:31.052189 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:15:31.052324 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:15:31.061370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:15:31.061491 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:15:31.067073 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:15:31.086667 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:15:31.112056 systemd[1]: Switching root.
Aug 13 07:15:31.200674 systemd-journald[176]: Journal stopped
Aug 13 07:15:16.152107 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:15:16.152147 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:16.152163 kernel: BIOS-provided physical RAM map:
Aug 13 07:15:16.152175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 07:15:16.152185 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 07:15:16.152196 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 07:15:16.152210 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Aug 13 07:15:16.152226 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Aug 13 07:15:16.152237 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 13 07:15:16.152272 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 07:15:16.152282 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 07:15:16.152292 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 07:15:16.152303 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 07:15:16.152313 kernel: NX (Execute Disable) protection: active
Aug 13 07:15:16.152343 kernel: APIC: Static calls initialized
Aug 13 07:15:16.152367 kernel: efi: EFI v2.7 by Microsoft
Aug 13 07:15:16.152390 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98
Aug 13 07:15:16.152400 kernel: SMBIOS 3.1.0 present.
Aug 13 07:15:16.152412 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Aug 13 07:15:16.152424 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 07:15:16.152436 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 07:15:16.152449 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Aug 13 07:15:16.152462 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 07:15:16.152472 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 07:15:16.152486 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 07:15:16.152498 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:15:16.152512 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 07:15:16.152525 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 07:15:16.152537 kernel: tsc: Detected 2593.906 MHz processor
Aug 13 07:15:16.152548 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:15:16.152560 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:15:16.152572 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 07:15:16.152583 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 07:15:16.152602 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:15:16.152618 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 07:15:16.152630 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 07:15:16.152640 kernel: Using GB pages for direct mapping
Aug 13 07:15:16.152651 kernel: Secure boot disabled
Aug 13 07:15:16.152663 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:15:16.152676 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 07:15:16.152693 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152708 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152721 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Aug 13 07:15:16.152733 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 07:15:16.152747 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152760 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152772 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152789 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152804 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152818 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152832 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 07:15:16.152846 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 07:15:16.152860 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Aug 13 07:15:16.152874 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 07:15:16.152888 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 07:15:16.152905 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 07:15:16.152919 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 07:15:16.152933 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 07:15:16.152947 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 07:15:16.152961 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 07:15:16.152975 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Aug 13 07:15:16.152989 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:15:16.153004 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:15:16.153018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 07:15:16.153035 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 07:15:16.153049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 07:15:16.153064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 07:15:16.153078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 07:15:16.153092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 07:15:16.153106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 07:15:16.153121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 07:15:16.153134 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 07:15:16.153146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 07:15:16.153162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 07:15:16.153176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 07:15:16.153190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 07:15:16.153204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 07:15:16.153218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 07:15:16.153231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 07:15:16.159676 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 07:15:16.159698 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 07:15:16.159712 kernel: Zone ranges:
Aug 13 07:15:16.159740 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:15:16.159753 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 07:15:16.159767 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:15:16.159781 kernel: Movable zone start for each node
Aug 13 07:15:16.159795 kernel: Early memory node ranges
Aug 13 07:15:16.159810 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 07:15:16.159824 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 07:15:16.159838 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 07:15:16.159852 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 07:15:16.159870 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 07:15:16.159884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:15:16.159899 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 07:15:16.159913 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Aug 13 07:15:16.159927 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 07:15:16.159941 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 07:15:16.159955 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:15:16.159969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:15:16.159983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:15:16.160000 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 07:15:16.160015 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 07:15:16.160029 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 07:15:16.160043 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 07:15:16.160057 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:15:16.160072 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 07:15:16.160086 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:15:16.160100 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 07:15:16.160114 kernel: pcpu-alloc: [0] 0 1
Aug 13 07:15:16.160131 kernel: Hyper-V: PV spinlocks enabled
Aug 13 07:15:16.160145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:15:16.160162 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:15:16.160176 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:15:16.160195 kernel: random: crng init done
Aug 13 07:15:16.160209 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 07:15:16.160223 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 07:15:16.160256 kernel: Fallback order for Node 0: 0
Aug 13 07:15:16.160283 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Aug 13 07:15:16.160308 kernel: Policy zone: Normal
Aug 13 07:15:16.160326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:15:16.160341 kernel: software IO TLB: area num 2.
Aug 13 07:15:16.160357 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 310124K reserved, 0K cma-reserved)
Aug 13 07:15:16.160372 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 07:15:16.160387 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:15:16.160402 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:15:16.160417 kernel: Dynamic Preempt: voluntary
Aug 13 07:15:16.160433 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:15:16.160449 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:15:16.160468 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 07:15:16.160483 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:15:16.160498 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:15:16.160514 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:15:16.160529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:15:16.160547 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 07:15:16.160562 kernel: Using NULL legacy PIC
Aug 13 07:15:16.160577 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 07:15:16.160592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:15:16.160607 kernel: Console: colour dummy device 80x25
Aug 13 07:15:16.160622 kernel: printk: console [tty1] enabled
Aug 13 07:15:16.160637 kernel: printk: console [ttyS0] enabled
Aug 13 07:15:16.160652 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 07:15:16.160667 kernel: ACPI: Core revision 20230628
Aug 13 07:15:16.160682 kernel: Failed to register legacy timer interrupt
Aug 13 07:15:16.160700 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:15:16.160715 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 07:15:16.160731 kernel: Hyper-V: Using IPI hypercalls
Aug 13 07:15:16.160745 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Aug 13 07:15:16.160760 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Aug 13 07:15:16.160776 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Aug 13 07:15:16.160791 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Aug 13 07:15:16.160806 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Aug 13 07:15:16.160821 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Aug 13 07:15:16.160840 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Aug 13 07:15:16.160855 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 07:15:16.160870 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 07:15:16.160885 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:15:16.160900 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:15:16.160915 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:15:16.160929 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 07:15:16.160944 kernel: RETBleed: Vulnerable
Aug 13 07:15:16.160959 kernel: Speculative Store Bypass: Vulnerable
Aug 13 07:15:16.160977 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:15:16.160992 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 07:15:16.161007 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:15:16.161022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:15:16.161037 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:15:16.161052 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:15:16.161067 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 07:15:16.161082 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 07:15:16.161097 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 07:15:16.161112 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:15:16.161126 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 07:15:16.161143 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 07:15:16.161158 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 07:15:16.161173 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 07:15:16.161188 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:15:16.161203 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:15:16.161218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:15:16.161233 kernel: landlock: Up and running.
Aug 13 07:15:16.161281 kernel: SELinux: Initializing.
Aug 13 07:15:16.161295 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.161308 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:15:16.161329 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 07:15:16.161345 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161365 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161380 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 07:15:16.161396 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 07:15:16.161410 kernel: signal: max sigframe size: 3632
Aug 13 07:15:16.161426 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:15:16.161442 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:15:16.161457 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:15:16.161472 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:15:16.161487 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:15:16.161506 kernel: .... node #0, CPUs: #1
Aug 13 07:15:16.161522 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 07:15:16.161538 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 07:15:16.161553 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:15:16.161569 kernel: smpboot: Max logical packages: 1
Aug 13 07:15:16.161584 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Aug 13 07:15:16.161599 kernel: devtmpfs: initialized
Aug 13 07:15:16.161614 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:15:16.161632 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 07:15:16.161648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:15:16.161664 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 07:15:16.161678 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:15:16.161694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:15:16.161709 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:15:16.161724 kernel: audit: type=2000 audit(1755069314.031:1): state=initialized audit_enabled=0 res=1
Aug 13 07:15:16.161738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:15:16.161753 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:15:16.161771 kernel: cpuidle: using governor menu
Aug 13 07:15:16.161787 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:15:16.161802 kernel: dca service started, version 1.12.1
Aug 13 07:15:16.161817 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Aug 13 07:15:16.161832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:15:16.161847 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:15:16.161862 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:15:16.161878 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:15:16.161896 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:15:16.161911 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:15:16.161926 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:15:16.161941 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:15:16.161956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:15:16.161971 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:15:16.161986 kernel: ACPI: Interpreter enabled Aug 13 07:15:16.162001 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:15:16.162016 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:15:16.162032 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:15:16.162050 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 07:15:16.162065 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 13 07:15:16.162080 kernel: iommu: Default domain type: Translated Aug 13 07:15:16.162095 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:15:16.162110 kernel: efivars: Registered efivars operations Aug 13 07:15:16.162125 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:15:16.162140 kernel: PCI: System does not support PCI Aug 13 07:15:16.162155 kernel: vgaarb: loaded Aug 13 07:15:16.162170 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 13 07:15:16.162188 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:15:16.162203 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:15:16.162218 kernel: pnp: PnP ACPI init Aug 13 07:15:16.162233 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 07:15:16.162285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:15:16.162298 kernel: NET: Registered PF_INET protocol family Aug 13 07:15:16.162311 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:15:16.162324 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 13 07:15:16.162339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:15:16.162358 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:15:16.162373 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 07:15:16.162387 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 13 07:15:16.162402 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:15:16.162417 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 13 07:15:16.162429 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:15:16.162443 kernel: NET: Registered PF_XDP protocol family Aug 13 07:15:16.162455 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:15:16.162468 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 07:15:16.162483 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Aug 13 07:15:16.162491 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:15:16.162500 kernel: Initialise system trusted keyrings Aug 13 07:15:16.162508 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 13 07:15:16.162516 kernel: Key type asymmetric registered Aug 13 07:15:16.162524 kernel: Asymmetric key parser 'x509' registered Aug 13 07:15:16.162532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:15:16.162541 kernel: io scheduler mq-deadline registered Aug 13 07:15:16.162549 kernel: io scheduler kyber registered
Aug 13 07:15:16.162560 kernel: io scheduler bfq registered Aug 13 07:15:16.162568 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:15:16.162576 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:15:16.162584 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:15:16.162592 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 07:15:16.162601 kernel: i8042: PNP: No PS/2 controller found. Aug 13 07:15:16.162788 kernel: rtc_cmos 00:02: registered as rtc0 Aug 13 07:15:16.162920 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T07:15:15 UTC (1755069315) Aug 13 07:15:16.163042 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 13 07:15:16.163061 kernel: intel_pstate: CPU model not supported Aug 13 07:15:16.163077 kernel: efifb: probing for efifb Aug 13 07:15:16.163092 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 07:15:16.163107 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 07:15:16.163122 kernel: efifb: scrolling: redraw Aug 13 07:15:16.163137 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 07:15:16.163152 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:15:16.163171 kernel: fb0: EFI VGA frame buffer device Aug 13 07:15:16.163186 kernel: pstore: Using crash dump compression: deflate Aug 13 07:15:16.163201 kernel: pstore: Registered efi_pstore as persistent store backend Aug 13 07:15:16.163215 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:15:16.163230 kernel: Segment Routing with IPv6 Aug 13 07:15:16.169438 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:15:16.169465 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:15:16.169483 kernel: Key type dns_resolver registered Aug 13 07:15:16.169500 kernel: IPI shorthand broadcast: enabled Aug 13 07:15:16.169524 kernel: sched_clock: Marking stable (988003600, 57105500)->(1297660000, -252550900)
Aug 13 07:15:16.169539 kernel: registered taskstats version 1 Aug 13 07:15:16.169554 kernel: Loading compiled-in X.509 certificates Aug 13 07:15:16.169570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:15:16.169585 kernel: Key type .fscrypt registered Aug 13 07:15:16.169600 kernel: Key type fscrypt-provisioning registered Aug 13 07:15:16.169616 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:15:16.169631 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:15:16.169647 kernel: ima: No architecture policies found Aug 13 07:15:16.169666 kernel: clk: Disabling unused clocks Aug 13 07:15:16.169681 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:15:16.169697 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:15:16.169713 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:15:16.169732 kernel: Run /init as init process Aug 13 07:15:16.169747 kernel: with arguments: Aug 13 07:15:16.169762 kernel: /init Aug 13 07:15:16.169778 kernel: with environment: Aug 13 07:15:16.169793 kernel: HOME=/ Aug 13 07:15:16.169810 kernel: TERM=linux Aug 13 07:15:16.169825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:15:16.169845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:15:16.169864 systemd[1]: Detected virtualization microsoft. Aug 13 07:15:16.169880 systemd[1]: Detected architecture x86-64. Aug 13 07:15:16.169895 systemd[1]: Running in initrd. Aug 13 07:15:16.169911 systemd[1]: No hostname configured, using default hostname. Aug 13 07:15:16.169926 systemd[1]: Hostname set to . 
Aug 13 07:15:16.169946 systemd[1]: Initializing machine ID from random generator. Aug 13 07:15:16.169962 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:15:16.169979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:15:16.169995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:15:16.170013 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:15:16.170029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:15:16.170045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:15:16.170062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:15:16.170083 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:15:16.170100 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:15:16.170116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:15:16.170132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:15:16.170148 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:15:16.170165 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:15:16.170184 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:15:16.170200 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:15:16.170216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:15:16.170232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:15:16.170263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 07:15:16.170280 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:15:16.170297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:15:16.170313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:15:16.170329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:15:16.170349 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:15:16.170365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:15:16.170382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:15:16.170398 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:15:16.170414 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:15:16.170431 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:15:16.170446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:15:16.170464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:16.170516 systemd-journald[176]: Collecting audit messages is disabled. Aug 13 07:15:16.170557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:15:16.170574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:15:16.170590 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:15:16.170611 systemd-journald[176]: Journal started Aug 13 07:15:16.170649 systemd-journald[176]: Runtime Journal (/run/log/journal/640a2a69cc0842b2a70bbe2c20dd5a9b) is 8.0M, max 158.8M, 150.8M free. Aug 13 07:15:16.152637 systemd-modules-load[177]: Inserted module 'overlay' Aug 13 07:15:16.183301 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:15:16.203713 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:15:16.200226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:15:16.209450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:15:16.216257 kernel: Bridge firewalling registered Aug 13 07:15:16.216430 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 13 07:15:16.218910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:15:16.228463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:16.231822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:15:16.248486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:15:16.252432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:15:16.262413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:15:16.262852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:15:16.279288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:15:16.288427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:15:16.289442 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:15:16.292504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:15:16.304124 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:15:16.323973 dracut-cmdline[212]: dracut-dracut-053 Aug 13 07:15:16.328040 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:15:16.341746 systemd-resolved[205]: Positive Trust Anchors: Aug 13 07:15:16.341760 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:15:16.341807 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:15:16.345370 systemd-resolved[205]: Defaulting to hostname 'linux'. Aug 13 07:15:16.346686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:15:16.368183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:15:16.418270 kernel: SCSI subsystem initialized Aug 13 07:15:16.428269 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 07:15:16.439270 kernel: iscsi: registered transport (tcp) Aug 13 07:15:16.460710 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:15:16.460779 kernel: QLogic iSCSI HBA Driver Aug 13 07:15:16.497702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:15:16.506401 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:15:16.532603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:15:16.532700 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:15:16.537268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:15:16.576267 kernel: raid6: avx512x4 gen() 18304 MB/s Aug 13 07:15:16.595257 kernel: raid6: avx512x2 gen() 18193 MB/s Aug 13 07:15:16.614256 kernel: raid6: avx512x1 gen() 18310 MB/s Aug 13 07:15:16.633253 kernel: raid6: avx2x4 gen() 18134 MB/s Aug 13 07:15:16.653256 kernel: raid6: avx2x2 gen() 17953 MB/s Aug 13 07:15:16.672953 kernel: raid6: avx2x1 gen() 13756 MB/s Aug 13 07:15:16.672984 kernel: raid6: using algorithm avx512x1 gen() 18310 MB/s Aug 13 07:15:16.693892 kernel: raid6: .... xor() 26067 MB/s, rmw enabled Aug 13 07:15:16.693956 kernel: raid6: using avx512x2 recovery algorithm Aug 13 07:15:16.716267 kernel: xor: automatically using best checksumming function avx Aug 13 07:15:16.864271 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:15:16.873697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:15:16.881434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:15:16.906284 systemd-udevd[395]: Using default interface naming scheme 'v255'. Aug 13 07:15:16.910943 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:15:16.926774 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:15:16.939067 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 13 07:15:16.968493 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:15:16.975545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:15:17.017599 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:15:17.029493 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:15:17.064767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:15:17.071558 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:15:17.075191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:15:17.081180 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:15:17.093656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:15:17.120285 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:15:17.128932 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:15:17.136289 kernel: hv_vmbus: Vmbus version:5.2 Aug 13 07:15:17.147757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:15:17.147924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:15:17.158615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:15:17.167521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:15:17.167790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.175238 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.187048 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 07:15:17.192260 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 07:15:17.192311 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 07:15:17.192326 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 07:15:17.197687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.213290 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 07:15:17.214476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:15:17.226348 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:15:17.226393 kernel: AES CTR mode by8 optimization enabled Aug 13 07:15:17.214629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.233022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:15:17.241331 kernel: PTP clock support registered Aug 13 07:15:17.284563 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:15:17.284631 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 07:15:17.285728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:15:17.301953 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 07:15:17.301977 kernel: hv_vmbus: registering driver hv_utils Aug 13 07:15:17.301991 kernel: scsi host1: storvsc_host_t Aug 13 07:15:17.302170 kernel: scsi host0: storvsc_host_t Aug 13 07:15:17.302304 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 07:15:17.302330 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 07:15:17.310900 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 07:15:17.310955 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 07:15:17.310973 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 07:15:17.310240 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:15:17.512307 systemd-resolved[205]: Clock change detected. Flushing caches. Aug 13 07:15:17.536656 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 07:15:17.543607 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 07:15:17.550383 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 07:15:17.555179 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 07:15:17.555536 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 07:15:17.562741 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:15:17.572772 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 07:15:17.585665 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 07:15:17.585928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 07:15:17.587707 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 07:15:17.590689 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 07:15:17.590896 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 07:15:17.601223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:17.601282 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 07:15:17.636277 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: VF slot 1 added Aug 13 07:15:17.646218 kernel: hv_vmbus: registering driver hv_pci Aug 13 07:15:17.646314 kernel: hv_pci 47e48a7e-3839-4f94-9f36-12fbcb3bcee7: PCI VMBus probing: Using version 0x10004 Aug 13 07:15:17.649681 kernel: hv_pci 47e48a7e-3839-4f94-9f36-12fbcb3bcee7: PCI host bridge to bus 3839:00 Aug 13 07:15:17.653426 kernel: pci_bus 3839:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 13 07:15:17.654278 kernel: pci_bus 3839:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 07:15:17.661606 kernel: pci 3839:00:02.0: [15b3:1016] type 00 class 0x020000
Aug 13 07:15:17.667290 kernel: pci 3839:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:15:17.671367 kernel: pci 3839:00:02.0: enabling Extended Tags Aug 13 07:15:17.685383 kernel: pci 3839:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3839:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 13 07:15:17.693728 kernel: pci_bus 3839:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 07:15:17.694143 kernel: pci 3839:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 13 07:15:17.884624 kernel: mlx5_core 3839:00:02.0: enabling device (0000 -> 0002) Aug 13 07:15:17.889282 kernel: mlx5_core 3839:00:02.0: firmware version: 14.30.5000 Aug 13 07:15:18.108472 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: VF registering: eth1 Aug 13 07:15:18.108878 kernel: mlx5_core 3839:00:02.0 eth1: joined to eth0 Aug 13 07:15:18.114842 kernel: mlx5_core 3839:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 07:15:18.125285 kernel: mlx5_core 3839:00:02.0 enP14393s1: renamed from eth1 Aug 13 07:15:18.176221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 07:15:18.230282 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446) Aug 13 07:15:18.245717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 07:15:18.279828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 07:15:18.392402 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (442) Aug 13 07:15:18.406735 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 07:15:18.412953 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 07:15:18.425435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:15:18.442287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:18.453289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:18.461280 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:19.461275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 07:15:19.463196 disk-uuid[602]: The operation has completed successfully. Aug 13 07:15:19.558711 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:15:19.558843 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:15:19.592599 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:15:19.602315 sh[715]: Success Aug 13 07:15:19.625290 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:15:20.376701 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:15:20.388380 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:15:20.394011 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:15:20.412170 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:15:20.412273 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:20.415675 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:15:20.418301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:15:20.420647 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:15:21.352894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:15:21.357871 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Aug 13 07:15:21.369565 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:15:21.375444 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:15:21.402563 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:21.402624 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:15:21.402645 kernel: BTRFS info (device sda6): using free space tree Aug 13 07:15:21.456637 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:15:21.469489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:15:21.491375 systemd-networkd[889]: lo: Link UP Aug 13 07:15:21.491385 systemd-networkd[889]: lo: Gained carrier Aug 13 07:15:21.493751 systemd-networkd[889]: Enumeration completed Aug 13 07:15:21.493994 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:15:21.496579 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:15:21.496582 systemd-networkd[889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:15:21.498025 systemd[1]: Reached target network.target - Network. Aug 13 07:15:21.555280 kernel: mlx5_core 3839:00:02.0 enP14393s1: Link up Aug 13 07:15:21.560284 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 07:15:21.568493 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:15:21.573803 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:15:21.578193 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Aug 13 07:15:21.587270 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: Data path switched to VF: enP14393s1 Aug 13 07:15:21.587980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:15:21.588179 systemd-networkd[889]: enP14393s1: Link UP Aug 13 07:15:21.588319 systemd-networkd[889]: eth0: Link UP Aug 13 07:15:21.588471 systemd-networkd[889]: eth0: Gained carrier Aug 13 07:15:21.588483 systemd-networkd[889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:15:21.596053 systemd-networkd[889]: enP14393s1: Gained carrier Aug 13 07:15:21.625322 systemd-networkd[889]: eth0: DHCPv4 address 10.200.4.46/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:15:23.180460 systemd-networkd[889]: eth0: Gained IPv6LL Aug 13 07:15:23.695602 ignition[900]: Ignition 2.19.0 Aug 13 07:15:23.695613 ignition[900]: Stage: fetch-offline Aug 13 07:15:23.695657 ignition[900]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.695668 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.695789 ignition[900]: parsed url from cmdline: "" Aug 13 07:15:23.695795 ignition[900]: no config URL provided Aug 13 07:15:23.695803 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:15:23.695815 ignition[900]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:15:23.695822 ignition[900]: failed to fetch config: resource requires networking Aug 13 07:15:23.719136 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:15:23.698209 ignition[900]: Ignition finished successfully Aug 13 07:15:23.736429 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 07:15:23.759338 ignition[909]: Ignition 2.19.0 Aug 13 07:15:23.759351 ignition[909]: Stage: fetch Aug 13 07:15:23.759576 ignition[909]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.759589 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.759805 ignition[909]: parsed url from cmdline: "" Aug 13 07:15:23.759809 ignition[909]: no config URL provided Aug 13 07:15:23.759817 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:15:23.760009 ignition[909]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:15:23.760427 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 07:15:23.887329 ignition[909]: GET result: OK Aug 13 07:15:23.887434 ignition[909]: config has been read from IMDS userdata Aug 13 07:15:23.887467 ignition[909]: parsing config with SHA512: 5ee818102767a928a411da12a993dff72c56e3ee54a35052a4422d12d00cc73dd664f73bd9f1ad028ca7ddfa66b548bbee1d288dbf618b279764cd13282874aa Aug 13 07:15:23.898360 unknown[909]: fetched base config from "system" Aug 13 07:15:23.898811 ignition[909]: fetch: fetch complete Aug 13 07:15:23.898373 unknown[909]: fetched base config from "system" Aug 13 07:15:23.898818 ignition[909]: fetch: fetch passed Aug 13 07:15:23.898379 unknown[909]: fetched user config from "azure" Aug 13 07:15:23.898857 ignition[909]: Ignition finished successfully Aug 13 07:15:23.902225 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:15:23.922619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:15:23.938317 ignition[915]: Ignition 2.19.0 Aug 13 07:15:23.938328 ignition[915]: Stage: kargs Aug 13 07:15:23.940554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Aug 13 07:15:23.938560 ignition[915]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.938574 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.939471 ignition[915]: kargs: kargs passed Aug 13 07:15:23.939515 ignition[915]: Ignition finished successfully Aug 13 07:15:23.956608 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:15:23.973440 ignition[921]: Ignition 2.19.0 Aug 13 07:15:23.973451 ignition[921]: Stage: disks Aug 13 07:15:23.973677 ignition[921]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:15:23.975626 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:15:23.973692 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 07:15:23.979288 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:15:23.974602 ignition[921]: disks: disks passed Aug 13 07:15:23.988418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:15:23.974650 ignition[921]: Ignition finished successfully Aug 13 07:15:23.995450 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:15:24.004009 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:15:24.006568 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:15:24.018451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:15:24.058605 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 07:15:24.067039 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:15:24.084836 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:15:24.176272 kernel: EXT4-fs (sda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:15:24.176831 systemd[1]: Mounted sysroot.mount - /sysroot. 
Aug 13 07:15:24.181136 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:15:24.249406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:15:24.267272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Aug 13 07:15:24.271275 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:15:24.271319 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:15:24.275723 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:15:24.282274 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:15:24.287364 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 07:15:24.292348 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 07:15:24.298052 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 07:15:24.298176 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:15:24.311151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:15:24.313537 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 07:15:24.327417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 07:15:25.629907 coreos-metadata[957]: Aug 13 07:15:25.629 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 07:15:25.637594 coreos-metadata[957]: Aug 13 07:15:25.637 INFO Fetch successful
Aug 13 07:15:25.637594 coreos-metadata[957]: Aug 13 07:15:25.637 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 07:15:25.648778 coreos-metadata[957]: Aug 13 07:15:25.648 INFO Fetch successful
Aug 13 07:15:25.653800 coreos-metadata[957]: Aug 13 07:15:25.653 INFO wrote hostname ci-4081.3.5-a-7346cb15f0 to /sysroot/etc/hostname
Aug 13 07:15:25.659498 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:15:25.957684 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:15:26.056945 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:15:26.107400 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:15:26.198022 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:15:27.758525 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:15:27.772385 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:15:27.775926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:15:27.789437 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:15:27.794802 kernel: BTRFS info (device sda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:15:27.824371 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:15:27.832802 ignition[1059]: INFO : Ignition 2.19.0
Aug 13 07:15:27.832802 ignition[1059]: INFO : Stage: mount
Aug 13 07:15:27.836564 ignition[1059]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:15:27.836564 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:15:27.836564 ignition[1059]: INFO : mount: mount passed
Aug 13 07:15:27.836564 ignition[1059]: INFO : Ignition finished successfully
Aug 13 07:15:27.835578 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:15:27.856369 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:15:27.865179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:15:27.934291 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070)
Aug 13 07:15:27.942860 kernel: BTRFS info (device sda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:15:27.942933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:15:27.945212 kernel: BTRFS info (device sda6): using free space tree
Aug 13 07:15:27.951280 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 07:15:27.953325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 07:15:27.983058 ignition[1087]: INFO : Ignition 2.19.0
Aug 13 07:15:27.985455 ignition[1087]: INFO : Stage: files
Aug 13 07:15:27.985455 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:15:27.985455 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:15:27.985455 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:15:27.995608 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:15:27.995608 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:15:28.244247 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:15:28.255655 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:15:28.255655 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:15:28.249588 unknown[1087]: wrote ssh authorized keys file for user: core
Aug 13 07:15:28.322970 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:15:28.330498 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 07:15:28.469299 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:15:28.693726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 07:15:28.693726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:15:28.708829 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 07:15:29.188460 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:15:30.272080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 07:15:30.272080 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:15:30.280956 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:15:30.300191 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:15:30.300191 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:15:30.300191 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:15:30.300191 ignition[1087]: INFO : files: files passed
Aug 13 07:15:30.300191 ignition[1087]: INFO : Ignition finished successfully
Aug 13 07:15:30.282618 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:15:30.306311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:15:30.338418 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:15:30.348244 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:15:30.352047 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:15:30.369038 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:15:30.369038 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:15:30.389134 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:15:30.381988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:15:30.389282 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:15:30.411559 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:15:30.437490 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:15:30.437609 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:15:30.443970 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:15:30.451434 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:15:30.458122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:15:30.481506 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:15:30.503136 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:15:30.517561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:15:30.534802 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:15:30.534922 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:15:30.545086 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:15:30.548691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:15:30.554402 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:15:30.559564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:15:30.559651 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:15:30.565071 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:15:30.570022 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:15:30.570108 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:15:30.570980 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:15:30.571404 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:15:30.571813 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:15:30.572280 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:15:30.572692 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:15:30.573083 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:15:30.573953 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:15:30.574366 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:15:30.574430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:15:30.575226 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:30.575631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:15:30.576026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:15:30.601122 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:15:30.608606 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:15:30.608699 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:15:30.613795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:15:30.613846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:15:30.619152 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:15:30.619213 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:15:30.624379 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:15:30.626969 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:15:30.689404 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:15:30.696357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:15:30.704286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:15:30.704359 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:15:30.709801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:15:30.709864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:15:30.726713 ignition[1140]: INFO : Ignition 2.19.0
Aug 13 07:15:30.726713 ignition[1140]: INFO : Stage: umount
Aug 13 07:15:30.726713 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:15:30.726713 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 07:15:30.741121 ignition[1140]: INFO : umount: umount passed
Aug 13 07:15:30.741121 ignition[1140]: INFO : Ignition finished successfully
Aug 13 07:15:30.728885 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:15:30.729054 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:15:30.733244 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:15:30.733307 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:15:30.741171 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:15:30.741228 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:15:30.745639 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:15:30.745686 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:15:30.750152 systemd[1]: Stopped target network.target - Network.
Aug 13 07:15:30.756766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:15:30.756838 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:15:30.759656 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:15:30.764073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:15:30.770539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:15:30.774103 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:15:30.804889 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:15:30.807323 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:15:30.807379 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:15:30.811588 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:15:30.811638 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:15:30.821889 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:15:30.821964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:15:30.829173 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:15:30.829245 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:15:30.834334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:15:30.838877 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:30.844849 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:15:30.845432 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:15:30.845518 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:15:30.848758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:15:30.848858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:15:30.863317 systemd-networkd[889]: eth0: DHCPv6 lease lost
Aug 13 07:15:30.867781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:15:30.868123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:15:30.874031 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:15:30.874203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:15:30.884010 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:15:30.884072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:15:30.896365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:15:30.900681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:15:30.900751 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:15:30.908799 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:15:30.908859 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:15:30.915960 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:15:30.916018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:15:30.920702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:15:30.920757 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:30.931029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:15:30.942623 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:15:30.942792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:15:30.948320 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:15:30.948397 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:15:30.953436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:15:30.953483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:15:30.965345 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:15:30.965412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:15:30.972620 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:15:30.972690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:15:30.992277 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: Data path switched from VF: enP14393s1
Aug 13 07:15:30.993426 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:15:30.993516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:15:31.006414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:15:31.009090 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:15:31.009159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:31.014660 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 07:15:31.014710 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:15:31.022711 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:15:31.022768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:15:31.040690 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:31.040761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:31.052189 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:15:31.052324 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:15:31.061370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:15:31.061491 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:15:31.067073 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:15:31.086667 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:15:31.112056 systemd[1]: Switching root.
Aug 13 07:15:31.200674 systemd-journald[176]: Journal stopped
Aug 13 07:15:44.146689 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:15:44.146740 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:15:44.146759 kernel: SELinux: policy capability open_perms=1
Aug 13 07:15:44.146773 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:15:44.146787 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:15:44.146802 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:15:44.146816 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:15:44.146833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:15:44.151899 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:15:44.151929 kernel: audit: type=1403 audit(1755069332.616:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:15:44.151950 systemd[1]: Successfully loaded SELinux policy in 88.268ms.
Aug 13 07:15:44.151970 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.717ms.
Aug 13 07:15:44.151990 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:15:44.152007 systemd[1]: Detected virtualization microsoft.
Aug 13 07:15:44.152034 systemd[1]: Detected architecture x86-64.
Aug 13 07:15:44.152052 systemd[1]: Detected first boot.
Aug 13 07:15:44.152070 systemd[1]: Hostname set to .
Aug 13 07:15:44.152089 systemd[1]: Initializing machine ID from random generator.
Aug 13 07:15:44.152106 zram_generator::config[1182]: No configuration found.
Aug 13 07:15:44.152127 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:15:44.152142 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:15:44.152157 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:15:44.152173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:15:44.152190 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:15:44.152207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:15:44.152223 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:15:44.152243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:15:44.152309 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:15:44.152330 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:15:44.152346 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:15:44.152364 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:15:44.152381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:15:44.152398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:15:44.152414 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:15:44.152435 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:15:44.152449 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:15:44.152464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:15:44.152478 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:15:44.152495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:15:44.152510 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:15:44.152530 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:15:44.152545 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:15:44.152563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:15:44.152578 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:15:44.152595 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:15:44.152611 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:15:44.152627 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:15:44.152643 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:15:44.152660 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:15:44.152680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:15:44.152697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:15:44.152715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:15:44.152732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:15:44.152749 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:15:44.152772 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:15:44.152791 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:15:44.152809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:44.152826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:15:44.152844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:15:44.152862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:15:44.152881 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:15:44.152897 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:15:44.152916 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:15:44.152932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:44.152949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:15:44.152965 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:15:44.152981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:44.152997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:15:44.153014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:44.153031 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:15:44.153047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:44.153067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:15:44.153084 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:15:44.153100 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:15:44.153116 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:15:44.153133 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:15:44.153149 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:15:44.153166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:15:44.153182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:15:44.153202 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:15:44.153221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:15:44.153238 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:15:44.153273 systemd[1]: Stopped verity-setup.service.
Aug 13 07:15:44.153292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:44.153340 systemd-journald[1264]: Collecting audit messages is disabled.
Aug 13 07:15:44.153379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:15:44.153397 systemd-journald[1264]: Journal started
Aug 13 07:15:44.153430 systemd-journald[1264]: Runtime Journal (/run/log/journal/ddb45de922604fe0a40d29555f9842c2) is 8.0M, max 158.8M, 150.8M free.
Aug 13 07:15:43.153520 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:15:43.597096 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 07:15:43.597511 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:15:44.161342 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:15:44.162519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:15:44.165693 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:15:44.173536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:15:44.176466 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:15:44.179576 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:15:44.183345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:15:44.187267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:44.188339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:44.191797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:44.192325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:44.197362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:15:44.201027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:15:44.220134 kernel: loop: module loaded
Aug 13 07:15:44.221994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:44.222228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:44.231640 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:15:44.235382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:15:44.236030 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:15:44.240753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:15:44.253456 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:15:44.264487 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:15:44.266896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:44.286130 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:15:44.295383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:15:44.298447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:15:44.305460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:15:44.310708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:15:44.317436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:15:44.331405 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:15:44.340321 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:15:44.343827 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:15:44.351557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:15:44.355246 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:15:44.358814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:15:44.391301 kernel: loop0: detected capacity change from 0 to 224512
Aug 13 07:15:44.391434 systemd-journald[1264]: Time spent on flushing to /var/log/journal/ddb45de922604fe0a40d29555f9842c2 is 55.254ms for 949 entries.
Aug 13 07:15:44.391434 systemd-journald[1264]: System Journal (/var/log/journal/ddb45de922604fe0a40d29555f9842c2) is 8.0M, max 2.6G, 2.6G free.
Aug 13 07:15:44.644733 systemd-journald[1264]: Received client request to flush runtime journal.
Aug 13 07:15:44.644834 kernel: ACPI: bus type drm_connector registered
Aug 13 07:15:44.644880 kernel: fuse: init (API version 7.39)
Aug 13 07:15:44.373047 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:15:44.389699 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:15:44.397451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:15:44.404320 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:15:44.409752 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:15:44.409967 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:15:44.413350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:15:44.413535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:15:44.430403 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:15:44.439165 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:15:44.441345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:15:44.447535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:15:44.461387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:15:44.468372 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:15:44.476129 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:15:44.647272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:15:44.681826 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:15:44.682570 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:15:44.744640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:15:44.752872 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:15:44.788282 kernel: loop1: detected capacity change from 0 to 140768
Aug 13 07:15:45.091576 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Aug 13 07:15:45.091608 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Aug 13 07:15:45.098498 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:15:45.107433 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:15:46.140287 kernel: loop2: detected capacity change from 0 to 31056
Aug 13 07:15:46.285283 kernel: loop3: detected capacity change from 0 to 142488
Aug 13 07:15:46.407604 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:15:46.420497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:15:46.441590 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Aug 13 07:15:46.441615 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Aug 13 07:15:46.448173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:15:46.479535 kernel: loop4: detected capacity change from 0 to 224512
Aug 13 07:15:46.496281 kernel: loop5: detected capacity change from 0 to 140768
Aug 13 07:15:46.513281 kernel: loop6: detected capacity change from 0 to 31056
Aug 13 07:15:46.523277 kernel: loop7: detected capacity change from 0 to 142488
Aug 13 07:15:46.542572 (sd-merge)[1346]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Aug 13 07:15:46.543199 (sd-merge)[1346]: Merged extensions into '/usr'.
Aug 13 07:15:46.547771 systemd[1]: Reloading requested from client PID 1310 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:15:46.547949 systemd[1]: Reloading...
Aug 13 07:15:46.633520 zram_generator::config[1371]: No configuration found.
Aug 13 07:15:46.795875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:15:46.861751 systemd[1]: Reloading finished in 313 ms.
Aug 13 07:15:46.894048 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:15:46.908445 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:15:46.915509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:15:46.938894 systemd[1]: Reloading requested from client PID 1430 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:15:46.939050 systemd[1]: Reloading...
Aug 13 07:15:46.945527 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:15:46.946043 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:15:46.947500 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:15:46.947918 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Aug 13 07:15:46.948008 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Aug 13 07:15:47.011211 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:15:47.011232 systemd-tmpfiles[1431]: Skipping /boot
Aug 13 07:15:47.019296 zram_generator::config[1459]: No configuration found.
Aug 13 07:15:47.030172 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:15:47.030193 systemd-tmpfiles[1431]: Skipping /boot
Aug 13 07:15:47.186208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:15:47.248219 systemd[1]: Reloading finished in 308 ms.
Aug 13 07:15:47.270464 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:15:47.287548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:15:47.308557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:15:47.318364 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:15:47.330550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:15:47.334976 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:15:47.340606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:47.340882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:47.342499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:47.354570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:47.362203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:47.364910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:47.365078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:47.370162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:47.370371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:47.379837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:47.380019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:47.386408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:47.386716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:47.394892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:15:47.404398 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Aug 13 07:15:47.406972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:47.407335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:15:47.413550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:15:47.418508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:15:47.422839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:15:47.430188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:15:47.437895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:15:47.438086 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:15:47.444106 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:15:47.446729 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:15:47.449442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:15:47.449639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:15:47.456010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:15:47.456408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:15:47.460829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:15:47.461005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:15:47.464517 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:15:47.464681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:15:47.470101 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:15:47.478061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:15:47.478135 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:15:47.540030 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:15:47.612920 systemd-resolved[1529]: Positive Trust Anchors:
Aug 13 07:15:47.612941 systemd-resolved[1529]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:15:47.612986 systemd-resolved[1529]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:15:47.698201 augenrules[1560]: No rules
Aug 13 07:15:47.699801 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:15:47.899332 systemd-resolved[1529]: Using system hostname 'ci-4081.3.5-a-7346cb15f0'.
Aug 13 07:15:47.901930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:15:47.907667 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:15:47.919499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:15:48.888428 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:15:48.897456 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:15:48.932666 systemd-udevd[1569]: Using default interface naming scheme 'v255'.
Aug 13 07:15:50.071273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:15:50.087584 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:15:50.180422 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:15:50.267572 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Aug 13 07:15:50.310389 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:15:50.312622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:50.320998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:50.321229 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:50.333535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:50.352631 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 07:15:50.352728 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 07:15:50.371728 systemd-networkd[1578]: lo: Link UP
Aug 13 07:15:50.371738 systemd-networkd[1578]: lo: Gained carrier
Aug 13 07:15:50.374040 systemd-networkd[1578]: Enumeration completed
Aug 13 07:15:50.374723 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:15:50.374736 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:15:50.375250 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:15:50.378081 systemd[1]: Reached target network.target - Network.
Aug 13 07:15:50.386420 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 07:15:50.388589 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:15:50.395759 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 07:15:50.395825 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 07:15:50.401585 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:15:50.404983 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:15:50.413375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:15:50.413583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:15:50.425522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:15:50.460321 kernel: mlx5_core 3839:00:02.0 enP14393s1: Link up
Aug 13 07:15:50.485956 kernel: hv_netvsc 7c1e522d-acbf-7c1e-522d-acbf7c1e522d eth0: Data path switched to VF: enP14393s1
Aug 13 07:15:50.491157 systemd-networkd[1578]: enP14393s1: Link UP
Aug 13 07:15:50.491889 systemd-networkd[1578]: eth0: Link UP
Aug 13 07:15:50.491912 systemd-networkd[1578]: eth0: Gained carrier
Aug 13 07:15:50.491967 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:15:50.504616 systemd-networkd[1578]: enP14393s1: Gained carrier
Aug 13 07:15:50.537432 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.4.46/24, gateway 10.200.4.1 acquired from 168.63.129.16
Aug 13 07:15:50.588292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1581)
Aug 13 07:15:50.704679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 07:15:50.713483 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:15:50.844278 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Aug 13 07:15:50.978999 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:15:51.113866 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:15:51.126482 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:15:51.295486 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:15:51.366756 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:15:51.370769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:15:51.384555 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:15:51.394120 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:15:51.421317 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 07:15:51.788459 systemd-networkd[1578]: eth0: Gained IPv6LL
Aug 13 07:15:51.791648 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 07:15:51.795864 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 07:15:53.263508 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:15:53.268623 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:15:53.487807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:16:05.109311 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:16:05.120725 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:16:05.134506 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:16:05.173194 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:16:05.176388 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:16:05.178996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:16:05.181879 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:16:05.184953 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:16:05.187697 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:16:05.193389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:16:05.196452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:16:05.196509 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:16:05.198661 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:16:05.244867 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:16:05.250703 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:16:05.291122 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:16:05.295944 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:16:05.300195 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:16:05.303674 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:16:05.306831 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:16:05.306866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:16:05.314483 systemd[1]: Starting chronyd.service - NTP client/server...
Aug 13 07:16:05.325338 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:16:05.341920 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:16:05.358431 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:16:05.366378 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:16:05.382705 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:16:05.386242 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:16:05.386321 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Aug 13 07:16:05.392660 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Aug 13 07:16:05.398464 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Aug 13 07:16:05.400426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:05.406015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:16:05.415456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 07:16:05.420388 (chronyd)[1673]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Aug 13 07:16:05.421896 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:16:05.434329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:16:05.449076 KVP[1681]: KVP starting; pid is:1681
Aug 13 07:16:05.449548 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:16:05.465465 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:16:05.474575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:16:05.475222 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:16:05.475390 jq[1677]: false
Aug 13 07:16:05.480462 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 07:16:05.500479 kernel: hv_utils: KVP IC version 4.0
Aug 13 07:16:05.495939 KVP[1681]: KVP LIC Version: 3.1
Aug 13 07:16:05.494321 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 07:16:05.508462 chronyd[1693]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Aug 13 07:16:05.515143 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 07:16:05.515608 extend-filesystems[1678]: Found loop4
Aug 13 07:16:05.515608 extend-filesystems[1678]: Found loop5
Aug 13 07:16:05.515608 extend-filesystems[1678]: Found loop6
Aug 13 07:16:05.515608 extend-filesystems[1678]: Found loop7
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda1
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda2
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda3
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found usr
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda4
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda6
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda7
Aug 13 07:16:05.527919 extend-filesystems[1678]: Found sda9
Aug 13 07:16:05.527919 extend-filesystems[1678]: Checking size of /dev/sda9
Aug 13 07:16:05.515726 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 07:16:05.552698 update_engine[1690]: I20250813 07:16:05.550511 1690 main.cc:92] Flatcar Update Engine starting
Aug 13 07:16:05.561783 jq[1692]: true
Aug 13 07:16:05.568975 chronyd[1693]: Timezone right/UTC failed leap second check, ignoring
Aug 13 07:16:05.569342 chronyd[1693]: Loaded seccomp filter (level 2)
Aug 13 07:16:05.571159 systemd[1]: Started chronyd.service - NTP client/server.
Aug 13 07:16:05.579715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 07:16:05.579967 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 07:16:05.596868 extend-filesystems[1678]: Old size kept for /dev/sda9
Aug 13 07:16:05.610927 extend-filesystems[1678]: Found sr0
Aug 13 07:16:05.608709 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 07:16:05.627768 jq[1712]: true
Aug 13 07:16:05.609851 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 07:16:05.633673 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 07:16:05.649211 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 07:16:05.649470 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 07:16:05.658590 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 07:16:05.684377 tar[1701]: linux-amd64/LICENSE
Aug 13 07:16:05.684377 tar[1701]: linux-amd64/helm
Aug 13 07:16:05.721370 systemd-logind[1689]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 07:16:05.727054 systemd-logind[1689]: New seat seat0.
Aug 13 07:16:05.734862 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1738)
Aug 13 07:16:05.729328 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 07:16:05.813049 bash[1769]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 07:16:05.808777 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 07:16:05.820237 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 07:16:06.073585 sshd_keygen[1734]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 07:16:06.082101 dbus-daemon[1676]: [system] SELinux support is enabled
Aug 13 07:16:06.083513 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 07:16:06.105331 update_engine[1690]: I20250813 07:16:06.105098 1690 update_check_scheduler.cc:74] Next update check in 6m43s
Aug 13 07:16:06.144288 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 07:16:06.144364 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 07:16:06.148554 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 07:16:06.148578 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 07:16:06.154874 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 07:16:06.159623 dbus-daemon[1676]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 07:16:06.166471 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 07:16:06.196334 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 07:16:06.209136 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 07:16:06.215452 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Aug 13 07:16:06.221904 coreos-metadata[1675]: Aug 13 07:16:06.220 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 07:16:06.227802 coreos-metadata[1675]: Aug 13 07:16:06.227 INFO Fetch successful
Aug 13 07:16:06.227802 coreos-metadata[1675]: Aug 13 07:16:06.227 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Aug 13 07:16:06.233183 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 07:16:06.233437 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 07:16:06.239345 coreos-metadata[1675]: Aug 13 07:16:06.234 INFO Fetch successful
Aug 13 07:16:06.239345 coreos-metadata[1675]: Aug 13 07:16:06.234 INFO Fetching http://168.63.129.16/machine/0f8063bc-20ef-4a64-a026-a5af4c8c00fd/a7cdb161%2D8e4a%2D4ba5%2Db328%2Dfdbb89c8f964.%5Fci%2D4081.3.5%2Da%2D7346cb15f0?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Aug 13 07:16:06.240815 coreos-metadata[1675]: Aug 13 07:16:06.240 INFO Fetch successful
Aug 13 07:16:06.242406 coreos-metadata[1675]: Aug 13 07:16:06.241 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Aug 13 07:16:06.254313 coreos-metadata[1675]: Aug 13 07:16:06.252 INFO Fetch successful
Aug 13 07:16:06.260579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 07:16:06.321405 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Aug 13 07:16:06.339413 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 07:16:06.348773 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 07:16:06.365191 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 07:16:06.370539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 07:16:06.378420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 07:16:06.381617 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:16:06.395580 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:16:06.630838 tar[1701]: linux-amd64/README.md Aug 13 07:16:06.646135 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:16:06.979104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:16:06.984669 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:16:07.263160 containerd[1720]: time="2025-08-13T07:16:07.262605600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:16:07.299937 containerd[1720]: time="2025-08-13T07:16:07.299668100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302226600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302279500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302302200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302492200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302515000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302593400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:07.303139 containerd[1720]: time="2025-08-13T07:16:07.302614100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.303756200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.303804800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.303834500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.303851100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.303988500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.304276900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.304458100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.304485600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.304600800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:16:07.304789 containerd[1720]: time="2025-08-13T07:16:07.304663300Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:16:07.319392 containerd[1720]: time="2025-08-13T07:16:07.319351600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:16:07.319567 containerd[1720]: time="2025-08-13T07:16:07.319551900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:16:07.319681 containerd[1720]: time="2025-08-13T07:16:07.319669600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:16:07.319730 containerd[1720]: time="2025-08-13T07:16:07.319721900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:16:07.319791 containerd[1720]: time="2025-08-13T07:16:07.319777700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:16:07.320011 containerd[1720]: time="2025-08-13T07:16:07.319991700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 13 07:16:07.320502 containerd[1720]: time="2025-08-13T07:16:07.320471900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:16:07.320695 containerd[1720]: time="2025-08-13T07:16:07.320677100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:16:07.320791 containerd[1720]: time="2025-08-13T07:16:07.320776000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:16:07.320871 containerd[1720]: time="2025-08-13T07:16:07.320859000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:16:07.320922 containerd[1720]: time="2025-08-13T07:16:07.320913100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.320968 containerd[1720]: time="2025-08-13T07:16:07.320959700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321010 containerd[1720]: time="2025-08-13T07:16:07.321002100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321045800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321061400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321073800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321084900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321098000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321123900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321143100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321160600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321178100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321194800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321214600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321230800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321264700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.321515 containerd[1720]: time="2025-08-13T07:16:07.321286400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321317600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321336300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321353500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321371100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321391700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321426600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321445500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.322053 containerd[1720]: time="2025-08-13T07:16:07.321461300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322314800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322400600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322414500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322427100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322437700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322456500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322466800Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:16:07.323537 containerd[1720]: time="2025-08-13T07:16:07.322475300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:16:07.323782 containerd[1720]: time="2025-08-13T07:16:07.322779400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:16:07.323782 containerd[1720]: time="2025-08-13T07:16:07.322869100Z" level=info msg="Connect containerd service" Aug 13 07:16:07.323782 containerd[1720]: time="2025-08-13T07:16:07.322922400Z" level=info msg="using legacy CRI server" Aug 13 07:16:07.323782 containerd[1720]: time="2025-08-13T07:16:07.322933300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:16:07.323782 containerd[1720]: time="2025-08-13T07:16:07.323070100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:16:07.324390 containerd[1720]: time="2025-08-13T07:16:07.324362300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:16:07.324610 containerd[1720]: time="2025-08-13T07:16:07.324579900Z" level=info msg="Start subscribing containerd event" Aug 13 07:16:07.324695 containerd[1720]: time="2025-08-13T07:16:07.324682900Z" level=info msg="Start recovering state" Aug 13 07:16:07.325088 containerd[1720]: time="2025-08-13T07:16:07.325070200Z" level=info msg="Start event monitor" Aug 13 07:16:07.325172 containerd[1720]: time="2025-08-13T07:16:07.325158800Z" level=info msg="Start 
snapshots syncer" Aug 13 07:16:07.325452 containerd[1720]: time="2025-08-13T07:16:07.325224200Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:16:07.325452 containerd[1720]: time="2025-08-13T07:16:07.325239200Z" level=info msg="Start streaming server" Aug 13 07:16:07.325452 containerd[1720]: time="2025-08-13T07:16:07.324987000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:16:07.325452 containerd[1720]: time="2025-08-13T07:16:07.325418000Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:16:07.327043 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:16:07.330495 containerd[1720]: time="2025-08-13T07:16:07.329963500Z" level=info msg="containerd successfully booted in 0.068761s" Aug 13 07:16:07.331153 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:16:07.336608 systemd[1]: Startup finished in 646ms (firmware) + 45.087s (loader) + 1.147s (kernel) + 16.602s (initrd) + 34.804s (userspace) = 1min 38.287s. Aug 13 07:16:07.648379 kubelet[1836]: E0813 07:16:07.648326 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:16:07.651136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:16:07.651426 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:16:07.651850 systemd[1]: kubelet.service: Consumed 1.026s CPU time. 
Aug 13 07:16:08.601016 login[1822]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 07:16:08.667671 login[1821]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:16:08.678447 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:16:08.689640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:16:08.693154 systemd-logind[1689]: New session 1 of user core. Aug 13 07:16:08.734443 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:16:08.740596 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:16:08.776779 (systemd)[1857]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:16:09.208369 systemd[1857]: Queued start job for default target default.target. Aug 13 07:16:09.219511 systemd[1857]: Created slice app.slice - User Application Slice. Aug 13 07:16:09.219548 systemd[1857]: Reached target paths.target - Paths. Aug 13 07:16:09.219567 systemd[1857]: Reached target timers.target - Timers. Aug 13 07:16:09.220831 systemd[1857]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:16:09.232678 systemd[1857]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:16:09.232809 systemd[1857]: Reached target sockets.target - Sockets. Aug 13 07:16:09.232828 systemd[1857]: Reached target basic.target - Basic System. Aug 13 07:16:09.232869 systemd[1857]: Reached target default.target - Main User Target. Aug 13 07:16:09.232904 systemd[1857]: Startup finished in 449ms. Aug 13 07:16:09.233165 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:16:09.243469 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 13 07:16:09.603431 login[1822]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:16:09.613313 systemd-logind[1689]: New session 2 of user core. Aug 13 07:16:09.617436 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:16:09.641188 waagent[1818]: 2025-08-13T07:16:09.641032Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 13 07:16:09.647027 waagent[1818]: 2025-08-13T07:16:09.644827Z INFO Daemon Daemon OS: flatcar 4081.3.5 Aug 13 07:16:09.650315 waagent[1818]: 2025-08-13T07:16:09.650146Z INFO Daemon Daemon Python: 3.11.9 Aug 13 07:16:09.656354 waagent[1818]: 2025-08-13T07:16:09.654628Z INFO Daemon Daemon Run daemon Aug 13 07:16:09.656354 waagent[1818]: 2025-08-13T07:16:09.655728Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Aug 13 07:16:09.656810 waagent[1818]: 2025-08-13T07:16:09.656745Z INFO Daemon Daemon Using waagent for provisioning Aug 13 07:16:09.657686 waagent[1818]: 2025-08-13T07:16:09.657647Z INFO Daemon Daemon Activate resource disk Aug 13 07:16:09.658387 waagent[1818]: 2025-08-13T07:16:09.658346Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 07:16:09.663059 waagent[1818]: 2025-08-13T07:16:09.663012Z INFO Daemon Daemon Found device: None Aug 13 07:16:09.663980 waagent[1818]: 2025-08-13T07:16:09.663937Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 07:16:09.664708 waagent[1818]: 2025-08-13T07:16:09.664669Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 07:16:09.667817 waagent[1818]: 2025-08-13T07:16:09.667773Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:16:09.669539 waagent[1818]: 2025-08-13T07:16:09.669496Z INFO Daemon Daemon Running default provisioning handler Aug 13 
07:16:09.690152 waagent[1818]: 2025-08-13T07:16:09.690065Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 07:16:09.694929 waagent[1818]: 2025-08-13T07:16:09.694872Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 07:16:09.698821 waagent[1818]: 2025-08-13T07:16:09.698597Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 07:16:09.702589 waagent[1818]: 2025-08-13T07:16:09.702053Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 07:16:09.974337 waagent[1818]: 2025-08-13T07:16:09.974158Z INFO Daemon Daemon Successfully mounted dvd Aug 13 07:16:10.017795 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 07:16:10.020315 waagent[1818]: 2025-08-13T07:16:10.019653Z INFO Daemon Daemon Detect protocol endpoint Aug 13 07:16:10.022303 waagent[1818]: 2025-08-13T07:16:10.022149Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 07:16:10.033285 waagent[1818]: 2025-08-13T07:16:10.022447Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 07:16:10.033285 waagent[1818]: 2025-08-13T07:16:10.023712Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 07:16:10.033285 waagent[1818]: 2025-08-13T07:16:10.025100Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 07:16:10.033285 waagent[1818]: 2025-08-13T07:16:10.025816Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 07:16:10.081984 waagent[1818]: 2025-08-13T07:16:10.081912Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 07:16:10.093467 waagent[1818]: 2025-08-13T07:16:10.082486Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 07:16:10.093467 waagent[1818]: 2025-08-13T07:16:10.082601Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 07:16:10.274634 waagent[1818]: 2025-08-13T07:16:10.274476Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 07:16:10.282133 waagent[1818]: 2025-08-13T07:16:10.275036Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 07:16:10.283885 waagent[1818]: 2025-08-13T07:16:10.283818Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:16:10.319214 waagent[1818]: 2025-08-13T07:16:10.319140Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.319923Z INFO Daemon Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.320036Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4355ac01-d1cf-4ffa-a522-7d115e97f2e2 eTag: 12791910216648848677 source: Fabric] Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.320329Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.320815Z INFO Daemon Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.320896Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:16:10.349473 waagent[1818]: 2025-08-13T07:16:10.326550Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 07:16:10.487156 waagent[1818]: 2025-08-13T07:16:10.487071Z INFO Daemon Downloaded certificate {'thumbprint': 'CA23A8E92E1C07287958D5E4979F3EAE1B85AAB3', 'hasPrivateKey': True} Aug 13 07:16:10.492941 waagent[1818]: 2025-08-13T07:16:10.492875Z INFO Daemon Fetch goal state completed Aug 13 07:16:10.537010 waagent[1818]: 2025-08-13T07:16:10.536749Z INFO Daemon Daemon Starting provisioning Aug 13 07:16:10.542520 waagent[1818]: 2025-08-13T07:16:10.542440Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 07:16:10.554146 waagent[1818]: 2025-08-13T07:16:10.553203Z INFO Daemon Daemon Set hostname [ci-4081.3.5-a-7346cb15f0] Aug 13 07:16:10.648362 waagent[1818]: 2025-08-13T07:16:10.648277Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-a-7346cb15f0] Aug 13 07:16:10.664325 waagent[1818]: 2025-08-13T07:16:10.649066Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 07:16:10.664325 waagent[1818]: 2025-08-13T07:16:10.655510Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 07:16:10.702571 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:16:10.702581 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 07:16:10.702630 systemd-networkd[1578]: eth0: DHCP lease lost Aug 13 07:16:10.703969 waagent[1818]: 2025-08-13T07:16:10.703878Z INFO Daemon Daemon Create user account if not exists Aug 13 07:16:10.720509 waagent[1818]: 2025-08-13T07:16:10.704302Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 07:16:10.720509 waagent[1818]: 2025-08-13T07:16:10.705122Z INFO Daemon Daemon Configure sudoer Aug 13 07:16:10.720509 waagent[1818]: 2025-08-13T07:16:10.706253Z INFO Daemon Daemon Configure sshd Aug 13 07:16:10.720509 waagent[1818]: 2025-08-13T07:16:10.707020Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 07:16:10.720509 waagent[1818]: 2025-08-13T07:16:10.707762Z INFO Daemon Daemon Deploy ssh public key. Aug 13 07:16:10.720618 systemd-networkd[1578]: eth0: DHCPv6 lease lost Aug 13 07:16:10.756316 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.4.46/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 13 07:16:11.853018 waagent[1818]: 2025-08-13T07:16:11.852956Z INFO Daemon Daemon Provisioning complete Aug 13 07:16:11.867958 waagent[1818]: 2025-08-13T07:16:11.867883Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 07:16:11.875179 waagent[1818]: 2025-08-13T07:16:11.875083Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Aug 13 07:16:11.880950 waagent[1818]: 2025-08-13T07:16:11.880864Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 13 07:16:12.030616 waagent[1907]: 2025-08-13T07:16:12.030509Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 13 07:16:12.031038 waagent[1907]: 2025-08-13T07:16:12.030677Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Aug 13 07:16:12.031038 waagent[1907]: 2025-08-13T07:16:12.030760Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 13 07:16:12.131874 waagent[1907]: 2025-08-13T07:16:12.131706Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 07:16:12.132061 waagent[1907]: 2025-08-13T07:16:12.132005Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 07:16:12.132157 waagent[1907]: 2025-08-13T07:16:12.132116Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 07:16:12.142004 waagent[1907]: 2025-08-13T07:16:12.141928Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 07:16:12.149015 waagent[1907]: 2025-08-13T07:16:12.148959Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 07:16:12.149494 waagent[1907]: 2025-08-13T07:16:12.149435Z INFO ExtHandler Aug 13 07:16:12.149589 waagent[1907]: 2025-08-13T07:16:12.149531Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e1e7bc44-44c5-4e97-9f17-8fac418d4564 eTag: 12791910216648848677 source: Fabric] Aug 13 07:16:12.150002 waagent[1907]: 2025-08-13T07:16:12.149947Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 07:16:12.150598 waagent[1907]: 2025-08-13T07:16:12.150540Z INFO ExtHandler Aug 13 07:16:12.150707 waagent[1907]: 2025-08-13T07:16:12.150627Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 07:16:12.154965 waagent[1907]: 2025-08-13T07:16:12.154896Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 07:16:12.220617 waagent[1907]: 2025-08-13T07:16:12.220529Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CA23A8E92E1C07287958D5E4979F3EAE1B85AAB3', 'hasPrivateKey': True} Aug 13 07:16:12.221116 waagent[1907]: 2025-08-13T07:16:12.221057Z INFO ExtHandler Fetch goal state completed Aug 13 07:16:12.234531 waagent[1907]: 2025-08-13T07:16:12.234469Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1907 Aug 13 07:16:12.234686 waagent[1907]: 2025-08-13T07:16:12.234638Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 07:16:12.236202 waagent[1907]: 2025-08-13T07:16:12.236142Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 07:16:12.236576 waagent[1907]: 2025-08-13T07:16:12.236526Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 07:16:12.258859 waagent[1907]: 2025-08-13T07:16:12.258816Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 07:16:12.259057 waagent[1907]: 2025-08-13T07:16:12.259012Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 07:16:12.265770 waagent[1907]: 2025-08-13T07:16:12.265396Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 07:16:12.271920 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit waagent.service)... Aug 13 07:16:12.271937 systemd[1]: Reloading... 
Aug 13 07:16:12.358956 zram_generator::config[1957]: No configuration found. Aug 13 07:16:12.478620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:12.559518 systemd[1]: Reloading finished in 287 ms. Aug 13 07:16:12.589281 waagent[1907]: 2025-08-13T07:16:12.586897Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 13 07:16:12.596086 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit waagent.service)... Aug 13 07:16:12.596102 systemd[1]: Reloading... Aug 13 07:16:12.684290 zram_generator::config[2048]: No configuration found. Aug 13 07:16:12.803283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:16:12.885598 systemd[1]: Reloading finished in 289 ms. Aug 13 07:16:12.914302 waagent[1907]: 2025-08-13T07:16:12.913460Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 07:16:12.914302 waagent[1907]: 2025-08-13T07:16:12.913653Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 07:16:13.087524 waagent[1907]: 2025-08-13T07:16:13.087360Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 07:16:13.088170 waagent[1907]: 2025-08-13T07:16:13.088103Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 13 07:16:13.089002 waagent[1907]: 2025-08-13T07:16:13.088922Z INFO ExtHandler ExtHandler Starting env monitor service. 
Aug 13 07:16:13.089581 waagent[1907]: 2025-08-13T07:16:13.089482Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 07:16:13.089581 waagent[1907]: 2025-08-13T07:16:13.089530Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Aug 13 07:16:13.089878 waagent[1907]: 2025-08-13T07:16:13.089835Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Aug 13 07:16:13.090167 waagent[1907]: 2025-08-13T07:16:13.090111Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Aug 13 07:16:13.090560 waagent[1907]: 2025-08-13T07:16:13.090474Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 07:16:13.090622 waagent[1907]: 2025-08-13T07:16:13.090585Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Aug 13 07:16:13.090864 waagent[1907]: 2025-08-13T07:16:13.090813Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Aug 13 07:16:13.091062 waagent[1907]: 2025-08-13T07:16:13.091017Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 07:16:13.091062 waagent[1907]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 07:16:13.091062 waagent[1907]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 07:16:13.091062 waagent[1907]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 07:16:13.091062 waagent[1907]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 07:16:13.091062 waagent[1907]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 07:16:13.091062 waagent[1907]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 07:16:13.091655 waagent[1907]: 2025-08-13T07:16:13.091578Z INFO EnvHandler ExtHandler Configure routes
Aug 13 07:16:13.091719 waagent[1907]: 2025-08-13T07:16:13.091685Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 07:16:13.091797 waagent[1907]: 2025-08-13T07:16:13.091760Z INFO EnvHandler ExtHandler Routes:None
Aug 13 07:16:13.092121 waagent[1907]: 2025-08-13T07:16:13.092083Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Aug 13 07:16:13.092429 waagent[1907]: 2025-08-13T07:16:13.092371Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Aug 13 07:16:13.092492 waagent[1907]: 2025-08-13T07:16:13.092430Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Aug 13 07:16:13.092713 waagent[1907]: 2025-08-13T07:16:13.092673Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Aug 13 07:16:13.099410 waagent[1907]: 2025-08-13T07:16:13.099351Z INFO ExtHandler ExtHandler
Aug 13 07:16:13.099509 waagent[1907]: 2025-08-13T07:16:13.099467Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7603dc94-c1db-4ed8-a66e-347e3481f9bc correlation 65ab1f2c-8371-4cd7-a63a-9b43864dec39 created: 2025-08-13T07:14:18.621185Z]
Aug 13 07:16:13.099890 waagent[1907]: 2025-08-13T07:16:13.099840Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Aug 13 07:16:13.100457 waagent[1907]: 2025-08-13T07:16:13.100410Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Aug 13 07:16:13.140094 waagent[1907]: 2025-08-13T07:16:13.139971Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: FDA5D35B-AC93-4C3C-8C8F-130DCDCFBED5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Aug 13 07:16:13.200976 waagent[1907]: 2025-08-13T07:16:13.200895Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 07:16:13.200976 waagent[1907]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 07:16:13.200976 waagent[1907]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 07:16:13.200976 waagent[1907]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:ac:bf brd ff:ff:ff:ff:ff:ff
Aug 13 07:16:13.200976 waagent[1907]: 3: enP14393s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:ac:bf brd ff:ff:ff:ff:ff:ff\ altname enP14393p0s2
Aug 13 07:16:13.200976 waagent[1907]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 07:16:13.200976 waagent[1907]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 07:16:13.200976 waagent[1907]: 2: eth0 inet 10.200.4.46/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 07:16:13.200976 waagent[1907]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 07:16:13.200976 waagent[1907]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Aug 13 07:16:13.200976 waagent[1907]: 2: eth0 inet6 fe80::7e1e:52ff:fe2d:acbf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Aug 13 07:16:13.311465 waagent[1907]: 2025-08-13T07:16:13.311383Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Aug 13 07:16:13.311465 waagent[1907]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.311465 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.311465 waagent[1907]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.311465 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.311465 waagent[1907]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.311465 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.311465 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 07:16:13.311465 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 07:16:13.311465 waagent[1907]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 07:16:13.314804 waagent[1907]: 2025-08-13T07:16:13.314741Z INFO EnvHandler ExtHandler Current Firewall rules:
Aug 13 07:16:13.314804 waagent[1907]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.314804 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.314804 waagent[1907]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.314804 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.314804 waagent[1907]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 07:16:13.314804 waagent[1907]: pkts bytes target prot opt in out source destination
Aug 13 07:16:13.314804 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 07:16:13.314804 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 07:16:13.314804 waagent[1907]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 07:16:13.315187 waagent[1907]: 2025-08-13T07:16:13.315048Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Aug 13 07:16:17.775364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:16:17.781506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:17.907200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:17.920716 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:18.556306 kubelet[2143]: E0813 07:16:18.556236 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:18.559962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:18.560160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:28.775594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 07:16:28.780530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:29.168708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:29.173538 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:29.378925 chronyd[1693]: Selected source PHC0
Aug 13 07:16:29.561841 kubelet[2158]: E0813 07:16:29.561746 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:29.564299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:29.564499 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:35.884474 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 07:16:35.892563 systemd[1]: Started sshd@0-10.200.4.46:22-10.200.16.10:51134.service - OpenSSH per-connection server daemon (10.200.16.10:51134).
Aug 13 07:16:36.493717 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 51134 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:36.495180 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:36.499900 systemd-logind[1689]: New session 3 of user core.
Aug 13 07:16:36.509428 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 07:16:37.020619 systemd[1]: Started sshd@1-10.200.4.46:22-10.200.16.10:51144.service - OpenSSH per-connection server daemon (10.200.16.10:51144).
Aug 13 07:16:37.603735 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 51144 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:37.605394 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:37.611081 systemd-logind[1689]: New session 4 of user core.
Aug 13 07:16:37.619437 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 07:16:38.030355 sshd[2171]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:38.035140 systemd[1]: sshd@1-10.200.4.46:22-10.200.16.10:51144.service: Deactivated successfully.
Aug 13 07:16:38.037087 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 07:16:38.037816 systemd-logind[1689]: Session 4 logged out. Waiting for processes to exit.
Aug 13 07:16:38.038726 systemd-logind[1689]: Removed session 4.
Aug 13 07:16:38.134840 systemd[1]: Started sshd@2-10.200.4.46:22-10.200.16.10:51150.service - OpenSSH per-connection server daemon (10.200.16.10:51150).
Aug 13 07:16:38.495668 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Aug 13 07:16:38.721732 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 51150 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:38.723525 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:38.728339 systemd-logind[1689]: New session 5 of user core.
Aug 13 07:16:38.734424 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 07:16:39.143740 sshd[2178]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:39.146716 systemd[1]: sshd@2-10.200.4.46:22-10.200.16.10:51150.service: Deactivated successfully.
Aug 13 07:16:39.148865 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 07:16:39.150498 systemd-logind[1689]: Session 5 logged out. Waiting for processes to exit.
Aug 13 07:16:39.151553 systemd-logind[1689]: Removed session 5.
Aug 13 07:16:39.248931 systemd[1]: Started sshd@3-10.200.4.46:22-10.200.16.10:51154.service - OpenSSH per-connection server daemon (10.200.16.10:51154).
Aug 13 07:16:39.736885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 07:16:39.742498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:39.840487 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 51154 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:39.841968 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:39.846723 systemd-logind[1689]: New session 6 of user core.
Aug 13 07:16:39.853471 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 07:16:40.098048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:40.102937 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:40.269342 sshd[2185]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:40.273336 systemd[1]: sshd@3-10.200.4.46:22-10.200.16.10:51154.service: Deactivated successfully.
Aug 13 07:16:40.275156 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 07:16:40.276034 systemd-logind[1689]: Session 6 logged out. Waiting for processes to exit.
Aug 13 07:16:40.276936 systemd-logind[1689]: Removed session 6.
Aug 13 07:16:40.380456 systemd[1]: Started sshd@4-10.200.4.46:22-10.200.16.10:38132.service - OpenSSH per-connection server daemon (10.200.16.10:38132).
Aug 13 07:16:40.520835 kubelet[2196]: E0813 07:16:40.520777 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:40.523245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:40.523464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:40.967498 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 38132 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:40.969276 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:40.974079 systemd-logind[1689]: New session 7 of user core.
Aug 13 07:16:40.981422 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 07:16:41.337900 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 07:16:41.338293 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:41.351701 sudo[2211]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:41.453852 sshd[2205]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:41.458583 systemd[1]: sshd@4-10.200.4.46:22-10.200.16.10:38132.service: Deactivated successfully.
Aug 13 07:16:41.460910 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 07:16:41.461830 systemd-logind[1689]: Session 7 logged out. Waiting for processes to exit.
Aug 13 07:16:41.462859 systemd-logind[1689]: Removed session 7.
Aug 13 07:16:41.558829 systemd[1]: Started sshd@5-10.200.4.46:22-10.200.16.10:38142.service - OpenSSH per-connection server daemon (10.200.16.10:38142).
Aug 13 07:16:42.144820 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 38142 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:42.146402 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:42.151004 systemd-logind[1689]: New session 8 of user core.
Aug 13 07:16:42.159408 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 07:16:42.472121 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 07:16:42.472784 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:42.476173 sudo[2220]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:42.481251 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 07:16:42.481654 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:42.493564 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 07:16:42.497429 auditctl[2223]: No rules
Aug 13 07:16:42.498609 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 07:16:42.498857 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 07:16:42.500765 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:16:42.526427 augenrules[2241]: No rules
Aug 13 07:16:42.527837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:16:42.529007 sudo[2219]: pam_unix(sudo:session): session closed for user root
Aug 13 07:16:42.631153 sshd[2216]: pam_unix(sshd:session): session closed for user core
Aug 13 07:16:42.635892 systemd[1]: sshd@5-10.200.4.46:22-10.200.16.10:38142.service: Deactivated successfully.
Aug 13 07:16:42.638107 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 07:16:42.639094 systemd-logind[1689]: Session 8 logged out. Waiting for processes to exit.
Aug 13 07:16:42.640174 systemd-logind[1689]: Removed session 8.
Aug 13 07:16:42.735019 systemd[1]: Started sshd@6-10.200.4.46:22-10.200.16.10:38150.service - OpenSSH per-connection server daemon (10.200.16.10:38150).
Aug 13 07:16:43.321036 sshd[2249]: Accepted publickey for core from 10.200.16.10 port 38150 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:16:43.321664 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:16:43.326695 systemd-logind[1689]: New session 9 of user core.
Aug 13 07:16:43.333403 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 07:16:43.646188 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 07:16:43.646570 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 07:16:44.109561 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 07:16:44.109711 (dockerd)[2267]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 07:16:44.601474 dockerd[2267]: time="2025-08-13T07:16:44.601408600Z" level=info msg="Starting up"
Aug 13 07:16:44.808736 dockerd[2267]: time="2025-08-13T07:16:44.808688662Z" level=info msg="Loading containers: start."
Aug 13 07:16:44.918283 kernel: Initializing XFRM netlink socket
Aug 13 07:16:45.004072 systemd-networkd[1578]: docker0: Link UP
Aug 13 07:16:45.033462 dockerd[2267]: time="2025-08-13T07:16:45.033418061Z" level=info msg="Loading containers: done."
Aug 13 07:16:45.053869 dockerd[2267]: time="2025-08-13T07:16:45.053820705Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 07:16:45.054061 dockerd[2267]: time="2025-08-13T07:16:45.053929609Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 07:16:45.054111 dockerd[2267]: time="2025-08-13T07:16:45.054055814Z" level=info msg="Daemon has completed initialization"
Aug 13 07:16:45.106862 dockerd[2267]: time="2025-08-13T07:16:45.106224317Z" level=info msg="API listen on /run/docker.sock"
Aug 13 07:16:45.106485 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 07:16:46.420091 containerd[1720]: time="2025-08-13T07:16:46.420049603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 07:16:47.188909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308735956.mount: Deactivated successfully.
Aug 13 07:16:48.804725 containerd[1720]: time="2025-08-13T07:16:48.804675765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:48.807114 containerd[1720]: time="2025-08-13T07:16:48.807061771Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28800002"
Aug 13 07:16:48.810181 containerd[1720]: time="2025-08-13T07:16:48.810125807Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:48.814365 containerd[1720]: time="2025-08-13T07:16:48.814314193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:48.816132 containerd[1720]: time="2025-08-13T07:16:48.815352139Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.395260135s"
Aug 13 07:16:48.816132 containerd[1720]: time="2025-08-13T07:16:48.815394241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 07:16:48.816407 containerd[1720]: time="2025-08-13T07:16:48.816379685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 07:16:50.359098 containerd[1720]: time="2025-08-13T07:16:50.359040550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:50.361694 containerd[1720]: time="2025-08-13T07:16:50.361446240Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783644"
Aug 13 07:16:50.364421 containerd[1720]: time="2025-08-13T07:16:50.364042837Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:50.368959 containerd[1720]: time="2025-08-13T07:16:50.368927920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:50.369936 containerd[1720]: time="2025-08-13T07:16:50.369902057Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.55348987s"
Aug 13 07:16:50.370067 containerd[1720]: time="2025-08-13T07:16:50.370046362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 07:16:50.370988 containerd[1720]: time="2025-08-13T07:16:50.370963297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 07:16:50.527009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 07:16:50.540276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 07:16:50.645406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 07:16:50.650168 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 07:16:51.312639 kubelet[2468]: E0813 07:16:51.312581 2468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 07:16:51.315070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 07:16:51.315309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 07:16:51.791373 update_engine[1690]: I20250813 07:16:51.791297 1690 update_attempter.cc:509] Updating boot flags...
Aug 13 07:16:51.887408 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2487)
Aug 13 07:16:52.034358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2480)
Aug 13 07:16:52.188284 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2480)
Aug 13 07:16:52.674820 containerd[1720]: time="2025-08-13T07:16:52.674763753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:52.679234 containerd[1720]: time="2025-08-13T07:16:52.679049613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176929"
Aug 13 07:16:52.685928 containerd[1720]: time="2025-08-13T07:16:52.685599859Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:52.691183 containerd[1720]: time="2025-08-13T07:16:52.691145867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:52.692201 containerd[1720]: time="2025-08-13T07:16:52.692167205Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 2.321173608s"
Aug 13 07:16:52.692352 containerd[1720]: time="2025-08-13T07:16:52.692328711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 07:16:52.693214 containerd[1720]: time="2025-08-13T07:16:52.693185543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 07:16:54.013453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718180004.mount: Deactivated successfully.
Aug 13 07:16:54.594235 containerd[1720]: time="2025-08-13T07:16:54.594168745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:54.597099 containerd[1720]: time="2025-08-13T07:16:54.596895036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895388"
Aug 13 07:16:54.601252 containerd[1720]: time="2025-08-13T07:16:54.600505057Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:54.605377 containerd[1720]: time="2025-08-13T07:16:54.605313218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:54.606384 containerd[1720]: time="2025-08-13T07:16:54.605869436Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.912650192s"
Aug 13 07:16:54.606384 containerd[1720]: time="2025-08-13T07:16:54.605907238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 07:16:54.606807 containerd[1720]: time="2025-08-13T07:16:54.606782067Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 07:16:55.257456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205354259.mount: Deactivated successfully.
Aug 13 07:16:56.578682 containerd[1720]: time="2025-08-13T07:16:56.578624131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:56.580817 containerd[1720]: time="2025-08-13T07:16:56.580755602Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Aug 13 07:16:56.583189 containerd[1720]: time="2025-08-13T07:16:56.583138582Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:56.588222 containerd[1720]: time="2025-08-13T07:16:56.588170350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:56.589379 containerd[1720]: time="2025-08-13T07:16:56.589208185Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.982385316s"
Aug 13 07:16:56.589379 containerd[1720]: time="2025-08-13T07:16:56.589247786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 07:16:56.590475 containerd[1720]: time="2025-08-13T07:16:56.590449026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 07:16:57.151001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037286485.mount: Deactivated successfully.
Aug 13 07:16:57.167451 containerd[1720]: time="2025-08-13T07:16:57.167407527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:57.170556 containerd[1720]: time="2025-08-13T07:16:57.170338625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Aug 13 07:16:57.176350 containerd[1720]: time="2025-08-13T07:16:57.175238389Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:57.181100 containerd[1720]: time="2025-08-13T07:16:57.181040583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:16:57.181867 containerd[1720]: time="2025-08-13T07:16:57.181727606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.246879ms"
Aug 13 07:16:57.181867 containerd[1720]: time="2025-08-13T07:16:57.181764207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 07:16:57.182592 containerd[1720]: time="2025-08-13T07:16:57.182423029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 07:16:57.868557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882869289.mount: Deactivated successfully.
Aug 13 07:17:00.218378 containerd[1720]: time="2025-08-13T07:17:00.218323064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:00.221308 containerd[1720]: time="2025-08-13T07:17:00.221071365Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Aug 13 07:17:00.224442 containerd[1720]: time="2025-08-13T07:17:00.224401088Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:00.231393 containerd[1720]: time="2025-08-13T07:17:00.230972230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:00.232222 containerd[1720]: time="2025-08-13T07:17:00.232136573Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.049681942s" Aug 13 07:17:00.232370 containerd[1720]: time="2025-08-13T07:17:00.232227776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 07:17:01.525508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 07:17:01.535274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:01.683441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:17:01.694839 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:17:01.750278 kubelet[2719]: E0813 07:17:01.750199 2719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:17:01.752578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:17:01.752931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:17:02.850006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:02.857556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:02.885680 systemd[1]: Reloading requested from client PID 2733 ('systemctl') (unit session-9.scope)... Aug 13 07:17:02.885696 systemd[1]: Reloading... Aug 13 07:17:02.978282 zram_generator::config[2773]: No configuration found. Aug 13 07:17:03.122905 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:03.210829 systemd[1]: Reloading finished in 324 ms. Aug 13 07:17:03.290825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:17:03.290941 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:17:03.291246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:03.298697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:04.303439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:17:04.303746 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:17:04.343335 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:04.343335 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 07:17:04.343335 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:04.343792 kubelet[2839]: I0813 07:17:04.343438 2839 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:17:04.700955 kubelet[2839]: I0813 07:17:04.700825 2839 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:17:04.700955 kubelet[2839]: I0813 07:17:04.700865 2839 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:17:04.703304 kubelet[2839]: I0813 07:17:04.701536 2839 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:17:05.201607 kubelet[2839]: E0813 07:17:05.201558 2839 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:05.202794 kubelet[2839]: I0813 
07:17:05.202635 2839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:17:05.210612 kubelet[2839]: E0813 07:17:05.210570 2839 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:17:05.212239 kubelet[2839]: I0813 07:17:05.210751 2839 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:17:05.215532 kubelet[2839]: I0813 07:17:05.214498 2839 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:17:05.215532 kubelet[2839]: I0813 07:17:05.214740 2839 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:17:05.215532 kubelet[2839]: I0813 07:17:05.214769 2839 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-a-7346cb15f0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:17:05.215532 kubelet[2839]: I0813 07:17:05.214925 2839 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:17:05.215791 kubelet[2839]: I0813 07:17:05.214934 2839 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:17:05.215791 kubelet[2839]: I0813 07:17:05.215533 2839 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:05.218954 kubelet[2839]: I0813 07:17:05.218933 2839 
kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:17:05.219040 kubelet[2839]: I0813 07:17:05.218962 2839 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:17:05.219040 kubelet[2839]: I0813 07:17:05.218989 2839 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:17:05.219040 kubelet[2839]: I0813 07:17:05.219001 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:17:05.223286 kubelet[2839]: W0813 07:17:05.222298 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:05.223286 kubelet[2839]: E0813 07:17:05.222375 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:05.223286 kubelet[2839]: I0813 07:17:05.222462 2839 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:17:05.223286 kubelet[2839]: I0813 07:17:05.222925 2839 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:17:05.225289 kubelet[2839]: W0813 07:17:05.225167 2839 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 07:17:05.237172 kubelet[2839]: I0813 07:17:05.236917 2839 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:17:05.237172 kubelet[2839]: I0813 07:17:05.236960 2839 server.go:1287] "Started kubelet" Aug 13 07:17:05.237762 kubelet[2839]: W0813 07:17:05.237501 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-7346cb15f0&limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:05.237762 kubelet[2839]: E0813 07:17:05.237559 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-7346cb15f0&limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:05.240568 kubelet[2839]: I0813 07:17:05.240290 2839 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:17:05.243283 kubelet[2839]: I0813 07:17:05.242118 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:17:05.243283 kubelet[2839]: I0813 07:17:05.242547 2839 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:17:05.243283 kubelet[2839]: I0813 07:17:05.242983 2839 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:17:05.248281 kubelet[2839]: I0813 07:17:05.246610 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:17:05.248281 kubelet[2839]: E0813 07:17:05.245341 2839 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.46:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4081.3.5-a-7346cb15f0.185b4257ef65830a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-7346cb15f0,UID:ci-4081.3.5-a-7346cb15f0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-7346cb15f0,},FirstTimestamp:2025-08-13 07:17:05.236935434 +0000 UTC m=+0.926038263,LastTimestamp:2025-08-13 07:17:05.236935434 +0000 UTC m=+0.926038263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-7346cb15f0,}" Aug 13 07:17:05.248721 kubelet[2839]: I0813 07:17:05.248703 2839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:17:05.254307 kubelet[2839]: I0813 07:17:05.254244 2839 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:17:05.254449 kubelet[2839]: E0813 07:17:05.254427 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:05.255522 kubelet[2839]: W0813 07:17:05.255473 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:05.255606 kubelet[2839]: E0813 07:17:05.255536 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:05.255655 kubelet[2839]: E0813 
07:17:05.255630 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-7346cb15f0?timeout=10s\": dial tcp 10.200.4.46:6443: connect: connection refused" interval="200ms" Aug 13 07:17:05.255694 kubelet[2839]: I0813 07:17:05.255679 2839 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:17:05.255802 kubelet[2839]: I0813 07:17:05.255784 2839 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:17:05.258954 kubelet[2839]: I0813 07:17:05.258936 2839 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:17:05.259061 kubelet[2839]: I0813 07:17:05.259050 2839 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:17:05.259203 kubelet[2839]: I0813 07:17:05.259184 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:17:05.265503 kubelet[2839]: E0813 07:17:05.265479 2839 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:17:05.282293 kubelet[2839]: I0813 07:17:05.281549 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:17:05.283444 kubelet[2839]: I0813 07:17:05.283424 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:17:05.283567 kubelet[2839]: I0813 07:17:05.283555 2839 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:17:05.283651 kubelet[2839]: I0813 07:17:05.283640 2839 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:17:05.283816 kubelet[2839]: I0813 07:17:05.283805 2839 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:17:05.283933 kubelet[2839]: E0813 07:17:05.283914 2839 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:17:05.288809 kubelet[2839]: W0813 07:17:05.288761 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:05.289689 kubelet[2839]: E0813 07:17:05.289660 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:05.355523 kubelet[2839]: E0813 07:17:05.355469 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:05.360829 kubelet[2839]: I0813 07:17:05.360797 2839 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:17:05.360829 kubelet[2839]: I0813 07:17:05.360820 2839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:17:05.361006 kubelet[2839]: I0813 07:17:05.360845 2839 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:05.366887 kubelet[2839]: I0813 07:17:05.366861 2839 policy_none.go:49] "None policy: Start" Aug 13 07:17:05.366887 kubelet[2839]: I0813 07:17:05.366887 2839 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:17:05.367024 kubelet[2839]: I0813 07:17:05.366907 2839 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:17:05.377899 systemd[1]: Created 
slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:17:05.385132 kubelet[2839]: E0813 07:17:05.385085 2839 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:17:05.387738 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:17:05.392823 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:17:05.403007 kubelet[2839]: I0813 07:17:05.402984 2839 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:17:05.403656 kubelet[2839]: I0813 07:17:05.403638 2839 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:17:05.403771 kubelet[2839]: I0813 07:17:05.403737 2839 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:17:05.404705 kubelet[2839]: I0813 07:17:05.404046 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:17:05.406381 kubelet[2839]: E0813 07:17:05.406358 2839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 07:17:05.406543 kubelet[2839]: E0813 07:17:05.406497 2839 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:05.456761 kubelet[2839]: E0813 07:17:05.456642 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-7346cb15f0?timeout=10s\": dial tcp 10.200.4.46:6443: connect: connection refused" interval="400ms" Aug 13 07:17:05.506834 kubelet[2839]: I0813 07:17:05.506763 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.507211 kubelet[2839]: E0813 07:17:05.507164 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.46:6443/api/v1/nodes\": dial tcp 10.200.4.46:6443: connect: connection refused" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.597142 systemd[1]: Created slice kubepods-burstable-poddaac24693259784f2758d71b6040a1cf.slice - libcontainer container kubepods-burstable-poddaac24693259784f2758d71b6040a1cf.slice. Aug 13 07:17:05.610978 kubelet[2839]: E0813 07:17:05.610946 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.613990 systemd[1]: Created slice kubepods-burstable-pod51f9a464dddddc8f25cc6763843eece1.slice - libcontainer container kubepods-burstable-pod51f9a464dddddc8f25cc6763843eece1.slice. 
Aug 13 07:17:05.616481 kubelet[2839]: E0813 07:17:05.616288 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.618248 systemd[1]: Created slice kubepods-burstable-podb178c8245b87ba2dcdca08ee71af7ced.slice - libcontainer container kubepods-burstable-podb178c8245b87ba2dcdca08ee71af7ced.slice. Aug 13 07:17:05.620041 kubelet[2839]: E0813 07:17:05.620017 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657477 kubelet[2839]: I0813 07:17:05.657394 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657477 kubelet[2839]: I0813 07:17:05.657452 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b178c8245b87ba2dcdca08ee71af7ced-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-7346cb15f0\" (UID: \"b178c8245b87ba2dcdca08ee71af7ced\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657477 kubelet[2839]: I0813 07:17:05.657483 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657852 
kubelet[2839]: I0813 07:17:05.657557 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657852 kubelet[2839]: I0813 07:17:05.657589 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657852 kubelet[2839]: I0813 07:17:05.657617 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657852 kubelet[2839]: I0813 07:17:05.657646 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.657852 kubelet[2839]: I0813 07:17:05.657673 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.658013 kubelet[2839]: I0813 07:17:05.657701 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.709705 kubelet[2839]: I0813 07:17:05.709574 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.710214 kubelet[2839]: E0813 07:17:05.710177 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.46:6443/api/v1/nodes\": dial tcp 10.200.4.46:6443: connect: connection refused" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:05.858111 kubelet[2839]: E0813 07:17:05.858069 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-7346cb15f0?timeout=10s\": dial tcp 10.200.4.46:6443: connect: connection refused" interval="800ms" Aug 13 07:17:05.912270 containerd[1720]: time="2025-08-13T07:17:05.912222963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-7346cb15f0,Uid:daac24693259784f2758d71b6040a1cf,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:05.917749 containerd[1720]: time="2025-08-13T07:17:05.917711761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-7346cb15f0,Uid:51f9a464dddddc8f25cc6763843eece1,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:05.921777 containerd[1720]: time="2025-08-13T07:17:05.921546499Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-7346cb15f0,Uid:b178c8245b87ba2dcdca08ee71af7ced,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:06.112539 kubelet[2839]: I0813 07:17:06.112511 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:06.113200 kubelet[2839]: E0813 07:17:06.113167 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.46:6443/api/v1/nodes\": dial tcp 10.200.4.46:6443: connect: connection refused" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:06.297009 kubelet[2839]: W0813 07:17:06.296970 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:06.297145 kubelet[2839]: E0813 07:17:06.297051 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:06.352891 kubelet[2839]: W0813 07:17:06.352848 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:06.353043 kubelet[2839]: E0813 07:17:06.352898 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:06.523802 kubelet[2839]: 
W0813 07:17:06.523669 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-7346cb15f0&limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:06.523802 kubelet[2839]: E0813 07:17:06.523738 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-7346cb15f0&limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:06.550864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264965578.mount: Deactivated successfully. Aug 13 07:17:06.574269 containerd[1720]: time="2025-08-13T07:17:06.574209712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:06.576757 containerd[1720]: time="2025-08-13T07:17:06.576703202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 07:17:06.579495 containerd[1720]: time="2025-08-13T07:17:06.579457101Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:06.582509 containerd[1720]: time="2025-08-13T07:17:06.582476310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:06.585263 containerd[1720]: time="2025-08-13T07:17:06.585207309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 
07:17:06.588518 containerd[1720]: time="2025-08-13T07:17:06.588481527Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:06.591457 containerd[1720]: time="2025-08-13T07:17:06.591414232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:17:06.595245 containerd[1720]: time="2025-08-13T07:17:06.595196269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:17:06.596547 containerd[1720]: time="2025-08-13T07:17:06.596015698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.690532ms" Aug 13 07:17:06.597540 containerd[1720]: time="2025-08-13T07:17:06.597508552Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 679.732389ms" Aug 13 07:17:06.600446 containerd[1720]: time="2025-08-13T07:17:06.600416557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 678.808255ms" Aug 13 07:17:06.658802 kubelet[2839]: E0813 07:17:06.658593 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-7346cb15f0?timeout=10s\": dial tcp 10.200.4.46:6443: connect: connection refused" interval="1.6s" Aug 13 07:17:06.658802 kubelet[2839]: W0813 07:17:06.658698 2839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.46:6443: connect: connection refused Aug 13 07:17:06.658802 kubelet[2839]: E0813 07:17:06.658765 2839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:06.687280 kubelet[2839]: E0813 07:17:06.685990 2839 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-7346cb15f0.185b4257ef65830a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-7346cb15f0,UID:ci-4081.3.5-a-7346cb15f0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-7346cb15f0,},FirstTimestamp:2025-08-13 07:17:05.236935434 +0000 UTC m=+0.926038263,LastTimestamp:2025-08-13 07:17:05.236935434 +0000 UTC 
m=+0.926038263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-7346cb15f0,}" Aug 13 07:17:06.870465 containerd[1720]: time="2025-08-13T07:17:06.868505015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:06.870465 containerd[1720]: time="2025-08-13T07:17:06.868559717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:06.870465 containerd[1720]: time="2025-08-13T07:17:06.868594718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.870465 containerd[1720]: time="2025-08-13T07:17:06.868681421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.887997 containerd[1720]: time="2025-08-13T07:17:06.887633204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:06.887997 containerd[1720]: time="2025-08-13T07:17:06.887760209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:06.887997 containerd[1720]: time="2025-08-13T07:17:06.887817111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.887997 containerd[1720]: time="2025-08-13T07:17:06.887911314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.890665 containerd[1720]: time="2025-08-13T07:17:06.890574110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:06.890665 containerd[1720]: time="2025-08-13T07:17:06.890627612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:06.890665 containerd[1720]: time="2025-08-13T07:17:06.890662913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.890977 containerd[1720]: time="2025-08-13T07:17:06.890775117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:06.922288 kubelet[2839]: I0813 07:17:06.922164 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:06.922575 systemd[1]: Started cri-containerd-b0fbac053d4abffd2f53a1c09331c7f0b0a466b03551ea6828e6da8b085f0819.scope - libcontainer container b0fbac053d4abffd2f53a1c09331c7f0b0a466b03551ea6828e6da8b085f0819. Aug 13 07:17:06.924598 kubelet[2839]: E0813 07:17:06.924561 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.46:6443/api/v1/nodes\": dial tcp 10.200.4.46:6443: connect: connection refused" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:06.933590 systemd[1]: Started cri-containerd-35dfc70d281d46f25d16eade06241efd56920e16e975a91b55ab226893a9c21e.scope - libcontainer container 35dfc70d281d46f25d16eade06241efd56920e16e975a91b55ab226893a9c21e. Aug 13 07:17:06.941238 systemd[1]: Started cri-containerd-6d1ae1e72c5fcee217b9414afc1ec2e41923fbaf6b07606130b1b10206318d42.scope - libcontainer container 6d1ae1e72c5fcee217b9414afc1ec2e41923fbaf6b07606130b1b10206318d42. 
Aug 13 07:17:06.994560 containerd[1720]: time="2025-08-13T07:17:06.994427652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-7346cb15f0,Uid:51f9a464dddddc8f25cc6763843eece1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0fbac053d4abffd2f53a1c09331c7f0b0a466b03551ea6828e6da8b085f0819\"" Aug 13 07:17:07.002792 containerd[1720]: time="2025-08-13T07:17:07.000844983Z" level=info msg="CreateContainer within sandbox \"b0fbac053d4abffd2f53a1c09331c7f0b0a466b03551ea6828e6da8b085f0819\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:17:07.034495 containerd[1720]: time="2025-08-13T07:17:07.034455794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-7346cb15f0,Uid:daac24693259784f2758d71b6040a1cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d1ae1e72c5fcee217b9414afc1ec2e41923fbaf6b07606130b1b10206318d42\"" Aug 13 07:17:07.037724 containerd[1720]: time="2025-08-13T07:17:07.037679610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-7346cb15f0,Uid:b178c8245b87ba2dcdca08ee71af7ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"35dfc70d281d46f25d16eade06241efd56920e16e975a91b55ab226893a9c21e\"" Aug 13 07:17:07.040004 containerd[1720]: time="2025-08-13T07:17:07.039980693Z" level=info msg="CreateContainer within sandbox \"6d1ae1e72c5fcee217b9414afc1ec2e41923fbaf6b07606130b1b10206318d42\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:17:07.042213 containerd[1720]: time="2025-08-13T07:17:07.042185372Z" level=info msg="CreateContainer within sandbox \"35dfc70d281d46f25d16eade06241efd56920e16e975a91b55ab226893a9c21e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:17:07.058284 containerd[1720]: time="2025-08-13T07:17:07.058237551Z" level=info msg="CreateContainer within sandbox 
\"b0fbac053d4abffd2f53a1c09331c7f0b0a466b03551ea6828e6da8b085f0819\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c36b17c979d2adf14dc6338e780d66d72a5c44f2ccff4ecc5ac0f57a6343c2e6\"" Aug 13 07:17:07.058789 containerd[1720]: time="2025-08-13T07:17:07.058756969Z" level=info msg="StartContainer for \"c36b17c979d2adf14dc6338e780d66d72a5c44f2ccff4ecc5ac0f57a6343c2e6\"" Aug 13 07:17:07.087412 systemd[1]: Started cri-containerd-c36b17c979d2adf14dc6338e780d66d72a5c44f2ccff4ecc5ac0f57a6343c2e6.scope - libcontainer container c36b17c979d2adf14dc6338e780d66d72a5c44f2ccff4ecc5ac0f57a6343c2e6. Aug 13 07:17:07.130337 containerd[1720]: time="2025-08-13T07:17:07.128794192Z" level=info msg="CreateContainer within sandbox \"35dfc70d281d46f25d16eade06241efd56920e16e975a91b55ab226893a9c21e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"321b2a5390624a632da754504458bbad1d79ace5544372c7c33b718622fdf456\"" Aug 13 07:17:07.136206 containerd[1720]: time="2025-08-13T07:17:07.134943714Z" level=info msg="StartContainer for \"321b2a5390624a632da754504458bbad1d79ace5544372c7c33b718622fdf456\"" Aug 13 07:17:07.140951 containerd[1720]: time="2025-08-13T07:17:07.140913929Z" level=info msg="CreateContainer within sandbox \"6d1ae1e72c5fcee217b9414afc1ec2e41923fbaf6b07606130b1b10206318d42\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2a3a8cbd6c1f4274cd6b7a19f64f2ccd20e0dd40b84bd01a5feb88ec31cb33b8\"" Aug 13 07:17:07.141146 containerd[1720]: time="2025-08-13T07:17:07.141121437Z" level=info msg="StartContainer for \"c36b17c979d2adf14dc6338e780d66d72a5c44f2ccff4ecc5ac0f57a6343c2e6\" returns successfully" Aug 13 07:17:07.141950 containerd[1720]: time="2025-08-13T07:17:07.141925866Z" level=info msg="StartContainer for \"2a3a8cbd6c1f4274cd6b7a19f64f2ccd20e0dd40b84bd01a5feb88ec31cb33b8\"" Aug 13 07:17:07.195434 systemd[1]: Started 
cri-containerd-321b2a5390624a632da754504458bbad1d79ace5544372c7c33b718622fdf456.scope - libcontainer container 321b2a5390624a632da754504458bbad1d79ace5544372c7c33b718622fdf456. Aug 13 07:17:07.204609 systemd[1]: Started cri-containerd-2a3a8cbd6c1f4274cd6b7a19f64f2ccd20e0dd40b84bd01a5feb88ec31cb33b8.scope - libcontainer container 2a3a8cbd6c1f4274cd6b7a19f64f2ccd20e0dd40b84bd01a5feb88ec31cb33b8. Aug 13 07:17:07.270845 kubelet[2839]: E0813 07:17:07.270737 2839 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.46:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:17:07.279091 containerd[1720]: time="2025-08-13T07:17:07.279037505Z" level=info msg="StartContainer for \"2a3a8cbd6c1f4274cd6b7a19f64f2ccd20e0dd40b84bd01a5feb88ec31cb33b8\" returns successfully" Aug 13 07:17:07.302786 kubelet[2839]: E0813 07:17:07.302755 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:07.310489 containerd[1720]: time="2025-08-13T07:17:07.310455437Z" level=info msg="StartContainer for \"321b2a5390624a632da754504458bbad1d79ace5544372c7c33b718622fdf456\" returns successfully" Aug 13 07:17:07.314617 kubelet[2839]: E0813 07:17:07.314564 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:08.320292 kubelet[2839]: E0813 07:17:08.320000 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 
07:17:08.321369 kubelet[2839]: E0813 07:17:08.320210 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:08.528297 kubelet[2839]: I0813 07:17:08.528070 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.199132 kubelet[2839]: E0813 07:17:09.199093 2839 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.292769 kubelet[2839]: I0813 07:17:09.292730 2839 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.292769 kubelet[2839]: E0813 07:17:09.292770 2839 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-a-7346cb15f0\": node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:09.304745 kubelet[2839]: E0813 07:17:09.304715 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:09.319452 kubelet[2839]: E0813 07:17:09.319427 2839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" node="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.405807 kubelet[2839]: E0813 07:17:09.405750 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:09.506441 kubelet[2839]: E0813 07:17:09.506311 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:09.555761 kubelet[2839]: I0813 07:17:09.555716 2839 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.597600 kubelet[2839]: E0813 07:17:09.597244 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.597600 kubelet[2839]: I0813 07:17:09.597306 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.600082 kubelet[2839]: E0813 07:17:09.599998 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.600082 kubelet[2839]: I0813 07:17:09.600062 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:09.602914 kubelet[2839]: E0813 07:17:09.602884 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-7346cb15f0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:10.223925 kubelet[2839]: I0813 07:17:10.223668 2839 apiserver.go:52] "Watching apiserver" Aug 13 07:17:10.255073 kubelet[2839]: I0813 07:17:10.255038 2839 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 07:17:10.320968 kubelet[2839]: I0813 07:17:10.320936 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:10.350210 kubelet[2839]: W0813 07:17:10.350117 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots] Aug 13 07:17:10.793298 kubelet[2839]: I0813 07:17:10.792479 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:10.837704 kubelet[2839]: W0813 07:17:10.837077 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:17:13.326687 systemd[1]: Reloading requested from client PID 3108 ('systemctl') (unit session-9.scope)... Aug 13 07:17:13.326706 systemd[1]: Reloading... Aug 13 07:17:13.415286 zram_generator::config[3144]: No configuration found. Aug 13 07:17:13.556969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:17:13.654717 systemd[1]: Reloading finished in 327 ms. Aug 13 07:17:13.696787 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:13.710696 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:17:13.710884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:13.719908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:17:14.135155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:17:14.143181 (kubelet)[3215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:17:14.187906 kubelet[3215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:14.187906 kubelet[3215]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 07:17:14.187906 kubelet[3215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:17:14.188404 kubelet[3215]: I0813 07:17:14.187991 3215 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:17:14.196351 kubelet[3215]: I0813 07:17:14.196290 3215 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 07:17:14.196351 kubelet[3215]: I0813 07:17:14.196319 3215 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:17:14.197284 kubelet[3215]: I0813 07:17:14.196804 3215 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 07:17:14.198945 kubelet[3215]: I0813 07:17:14.198927 3215 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:17:14.201152 kubelet[3215]: I0813 07:17:14.200972 3215 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:17:14.204903 kubelet[3215]: E0813 07:17:14.204586 3215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:17:14.204903 kubelet[3215]: I0813 07:17:14.204616 3215 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:17:14.207851 kubelet[3215]: I0813 07:17:14.207822 3215 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:17:14.208056 kubelet[3215]: I0813 07:17:14.208014 3215 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:17:14.208227 kubelet[3215]: I0813 07:17:14.208054 3215 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-7346cb15f0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:17:14.208383 kubelet[3215]: I0813 07:17:14.208234 3215 topology_manager.go:138] "Creating topology manager 
with none policy" Aug 13 07:17:14.208383 kubelet[3215]: I0813 07:17:14.208249 3215 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 07:17:14.208383 kubelet[3215]: I0813 07:17:14.208325 3215 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:14.208509 kubelet[3215]: I0813 07:17:14.208473 3215 kubelet.go:446] "Attempting to sync node with API server" Aug 13 07:17:14.208509 kubelet[3215]: I0813 07:17:14.208498 3215 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:17:14.208583 kubelet[3215]: I0813 07:17:14.208519 3215 kubelet.go:352] "Adding apiserver pod source" Aug 13 07:17:14.208583 kubelet[3215]: I0813 07:17:14.208532 3215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:17:14.213302 kubelet[3215]: I0813 07:17:14.212714 3215 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:17:14.213302 kubelet[3215]: I0813 07:17:14.213190 3215 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:17:14.214215 kubelet[3215]: I0813 07:17:14.214189 3215 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 07:17:14.214310 kubelet[3215]: I0813 07:17:14.214228 3215 server.go:1287] "Started kubelet" Aug 13 07:17:14.227380 kubelet[3215]: I0813 07:17:14.227351 3215 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:17:14.228996 kubelet[3215]: I0813 07:17:14.228979 3215 server.go:479] "Adding debug handlers to kubelet server" Aug 13 07:17:14.231673 kubelet[3215]: I0813 07:17:14.231627 3215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:17:14.232003 kubelet[3215]: I0813 07:17:14.231988 3215 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:17:14.232120 kubelet[3215]: I0813 
07:17:14.232089 3215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:17:14.240225 kubelet[3215]: I0813 07:17:14.240074 3215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:17:14.242441 kubelet[3215]: I0813 07:17:14.242417 3215 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 07:17:14.242996 kubelet[3215]: E0813 07:17:14.242701 3215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-7346cb15f0\" not found" Aug 13 07:17:14.248074 kubelet[3215]: I0813 07:17:14.245296 3215 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 07:17:14.248074 kubelet[3215]: I0813 07:17:14.245476 3215 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:17:14.250936 kubelet[3215]: I0813 07:17:14.249404 3215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:17:14.250936 kubelet[3215]: I0813 07:17:14.250810 3215 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:17:14.250936 kubelet[3215]: I0813 07:17:14.250909 3215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:17:14.252376 kubelet[3215]: I0813 07:17:14.252353 3215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:17:14.252467 kubelet[3215]: I0813 07:17:14.252383 3215 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 07:17:14.252467 kubelet[3215]: I0813 07:17:14.252403 3215 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 07:17:14.252467 kubelet[3215]: I0813 07:17:14.252411 3215 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 07:17:14.252589 kubelet[3215]: E0813 07:17:14.252459 3215 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:17:14.256331 kubelet[3215]: E0813 07:17:14.256307 3215 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:17:14.256916 kubelet[3215]: I0813 07:17:14.256831 3215 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:17:14.308933 kubelet[3215]: I0813 07:17:14.308905 3215 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 07:17:14.308933 kubelet[3215]: I0813 07:17:14.308921 3215 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 07:17:14.308933 kubelet[3215]: I0813 07:17:14.308943 3215 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:17:14.309235 kubelet[3215]: I0813 07:17:14.309122 3215 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:17:14.309235 kubelet[3215]: I0813 07:17:14.309136 3215 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:17:14.309235 kubelet[3215]: I0813 07:17:14.309161 3215 policy_none.go:49] "None policy: Start" Aug 13 07:17:14.309235 kubelet[3215]: I0813 07:17:14.309175 3215 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 07:17:14.309235 kubelet[3215]: I0813 07:17:14.309187 3215 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:17:14.309442 kubelet[3215]: I0813 07:17:14.309350 3215 state_mem.go:75] "Updated machine memory state" Aug 13 07:17:14.313086 kubelet[3215]: I0813 07:17:14.313068 3215 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:17:14.314039 
kubelet[3215]: I0813 07:17:14.313608 3215 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:17:14.314039 kubelet[3215]: I0813 07:17:14.313622 3215 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:17:14.314039 kubelet[3215]: I0813 07:17:14.313859 3215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:17:14.315033 kubelet[3215]: E0813 07:17:14.315010 3215 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 07:17:14.353403 kubelet[3215]: I0813 07:17:14.353366 3215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:14.353799 kubelet[3215]: I0813 07:17:14.353366 3215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:14.353799 kubelet[3215]: I0813 07:17:14.353564 3215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:14.369718 kubelet[3215]: W0813 07:17:14.369676 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:17:14.370497 kubelet[3215]: W0813 07:17:14.370472 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:17:14.370616 kubelet[3215]: E0813 07:17:14.370542 3215 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:14.370616 kubelet[3215]: W0813 07:17:14.370615 3215 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 07:17:14.370702 kubelet[3215]: E0813 07:17:14.370641 3215 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-a-7346cb15f0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.417285 kubelet[3215]: I0813 07:17:14.416745 3215 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.435035 kubelet[3215]: I0813 07:17:14.434652 3215 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.435035 kubelet[3215]: I0813 07:17:14.434760 3215 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.546119 kubelet[3215]: I0813 07:17:14.546082 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.546960 kubelet[3215]: I0813 07:17:14.546927 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547153 kubelet[3215]: I0813 07:17:14.546967 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/daac24693259784f2758d71b6040a1cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-7346cb15f0\" (UID: \"daac24693259784f2758d71b6040a1cf\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547153 kubelet[3215]: I0813 07:17:14.546995 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547153 kubelet[3215]: I0813 07:17:14.547017 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547153 kubelet[3215]: I0813 07:17:14.547042 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b178c8245b87ba2dcdca08ee71af7ced-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-7346cb15f0\" (UID: \"b178c8245b87ba2dcdca08ee71af7ced\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547153 kubelet[3215]: I0813 07:17:14.547062 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547403 kubelet[3215]: I0813 07:17:14.547084 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:14.547403 kubelet[3215]: I0813 07:17:14.547107 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51f9a464dddddc8f25cc6763843eece1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-7346cb15f0\" (UID: \"51f9a464dddddc8f25cc6763843eece1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0"
Aug 13 07:17:15.209931 kubelet[3215]: I0813 07:17:15.209886 3215 apiserver.go:52] "Watching apiserver"
Aug 13 07:17:15.246371 kubelet[3215]: I0813 07:17:15.246324 3215 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 07:17:15.301857 kubelet[3215]: I0813 07:17:15.301602 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-7346cb15f0" podStartSLOduration=1.301584805 podStartE2EDuration="1.301584805s" podCreationTimestamp="2025-08-13 07:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:15.301526203 +0000 UTC m=+1.154380155" watchObservedRunningTime="2025-08-13 07:17:15.301584805 +0000 UTC m=+1.154438657"
Aug 13 07:17:15.321972 kubelet[3215]: I0813 07:17:15.321660 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-7346cb15f0" podStartSLOduration=5.321641295 podStartE2EDuration="5.321641295s" podCreationTimestamp="2025-08-13 07:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:15.321244982 +0000 UTC m=+1.174098934" watchObservedRunningTime="2025-08-13 07:17:15.321641295 +0000 UTC m=+1.174495247"
Aug 13 07:17:15.355819 kubelet[3215]: I0813 07:17:15.355662 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-7346cb15f0" podStartSLOduration=5.355641366 podStartE2EDuration="5.355641366s" podCreationTimestamp="2025-08-13 07:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:15.338707883 +0000 UTC m=+1.191561735" watchObservedRunningTime="2025-08-13 07:17:15.355641366 +0000 UTC m=+1.208495318"
Aug 13 07:17:17.554015 kubelet[3215]: I0813 07:17:17.553962 3215 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 07:17:17.554708 containerd[1720]: time="2025-08-13T07:17:17.554461821Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 07:17:17.555103 kubelet[3215]: I0813 07:17:17.554809 3215 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 07:17:18.522564 systemd[1]: Created slice kubepods-besteffort-pod8b4b69f8_cb4c_4510_921a_84d39714ddd9.slice - libcontainer container kubepods-besteffort-pod8b4b69f8_cb4c_4510_921a_84d39714ddd9.slice.
Aug 13 07:17:18.571104 kubelet[3215]: I0813 07:17:18.571057 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b4b69f8-cb4c-4510-921a-84d39714ddd9-kube-proxy\") pod \"kube-proxy-df7zh\" (UID: \"8b4b69f8-cb4c-4510-921a-84d39714ddd9\") " pod="kube-system/kube-proxy-df7zh"
Aug 13 07:17:18.571571 kubelet[3215]: I0813 07:17:18.571110 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58bkw\" (UniqueName: \"kubernetes.io/projected/8b4b69f8-cb4c-4510-921a-84d39714ddd9-kube-api-access-58bkw\") pod \"kube-proxy-df7zh\" (UID: \"8b4b69f8-cb4c-4510-921a-84d39714ddd9\") " pod="kube-system/kube-proxy-df7zh"
Aug 13 07:17:18.571571 kubelet[3215]: I0813 07:17:18.571142 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b4b69f8-cb4c-4510-921a-84d39714ddd9-xtables-lock\") pod \"kube-proxy-df7zh\" (UID: \"8b4b69f8-cb4c-4510-921a-84d39714ddd9\") " pod="kube-system/kube-proxy-df7zh"
Aug 13 07:17:18.571571 kubelet[3215]: I0813 07:17:18.571162 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b4b69f8-cb4c-4510-921a-84d39714ddd9-lib-modules\") pod \"kube-proxy-df7zh\" (UID: \"8b4b69f8-cb4c-4510-921a-84d39714ddd9\") " pod="kube-system/kube-proxy-df7zh"
Aug 13 07:17:18.646232 systemd[1]: Created slice kubepods-besteffort-pode4108d2a_b834_4b48_a2df_2319df7c53ff.slice - libcontainer container kubepods-besteffort-pode4108d2a_b834_4b48_a2df_2319df7c53ff.slice.
Aug 13 07:17:18.672440 kubelet[3215]: I0813 07:17:18.671704 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4108d2a-b834-4b48-a2df-2319df7c53ff-var-lib-calico\") pod \"tigera-operator-747864d56d-jzfpq\" (UID: \"e4108d2a-b834-4b48-a2df-2319df7c53ff\") " pod="tigera-operator/tigera-operator-747864d56d-jzfpq"
Aug 13 07:17:18.672440 kubelet[3215]: I0813 07:17:18.671751 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzdh\" (UniqueName: \"kubernetes.io/projected/e4108d2a-b834-4b48-a2df-2319df7c53ff-kube-api-access-tpzdh\") pod \"tigera-operator-747864d56d-jzfpq\" (UID: \"e4108d2a-b834-4b48-a2df-2319df7c53ff\") " pod="tigera-operator/tigera-operator-747864d56d-jzfpq"
Aug 13 07:17:18.832975 containerd[1720]: time="2025-08-13T07:17:18.832934918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df7zh,Uid:8b4b69f8-cb4c-4510-921a-84d39714ddd9,Namespace:kube-system,Attempt:0,}"
Aug 13 07:17:18.876207 containerd[1720]: time="2025-08-13T07:17:18.874901670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:18.876207 containerd[1720]: time="2025-08-13T07:17:18.874942771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:18.876207 containerd[1720]: time="2025-08-13T07:17:18.874957372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:18.876207 containerd[1720]: time="2025-08-13T07:17:18.875025274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:18.911401 systemd[1]: Started cri-containerd-fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd.scope - libcontainer container fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd.
Aug 13 07:17:18.934163 containerd[1720]: time="2025-08-13T07:17:18.934130179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df7zh,Uid:8b4b69f8-cb4c-4510-921a-84d39714ddd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd\""
Aug 13 07:17:18.938542 containerd[1720]: time="2025-08-13T07:17:18.938503619Z" level=info msg="CreateContainer within sandbox \"fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 07:17:18.950491 containerd[1720]: time="2025-08-13T07:17:18.950459005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jzfpq,Uid:e4108d2a-b834-4b48-a2df-2319df7c53ff,Namespace:tigera-operator,Attempt:0,}"
Aug 13 07:17:19.002801 containerd[1720]: time="2025-08-13T07:17:19.002738989Z" level=info msg="CreateContainer within sandbox \"fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e6ba4a6a513a15e9828c52baf77a221da8ae4d04c9f44fedc41e70de4f445d6\""
Aug 13 07:17:19.004915 containerd[1720]: time="2025-08-13T07:17:19.004863058Z" level=info msg="StartContainer for \"7e6ba4a6a513a15e9828c52baf77a221da8ae4d04c9f44fedc41e70de4f445d6\""
Aug 13 07:17:19.027406 containerd[1720]: time="2025-08-13T07:17:19.027310681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:19.027686 containerd[1720]: time="2025-08-13T07:17:19.027609491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:19.027686 containerd[1720]: time="2025-08-13T07:17:19.027652092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:19.028330 containerd[1720]: time="2025-08-13T07:17:19.027738195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:19.049430 systemd[1]: Started cri-containerd-7e6ba4a6a513a15e9828c52baf77a221da8ae4d04c9f44fedc41e70de4f445d6.scope - libcontainer container 7e6ba4a6a513a15e9828c52baf77a221da8ae4d04c9f44fedc41e70de4f445d6.
Aug 13 07:17:19.054336 systemd[1]: Started cri-containerd-f31a47a726cbb08d38beb3cfec262f5a15b4de83ee99b92578680a5612bafdb4.scope - libcontainer container f31a47a726cbb08d38beb3cfec262f5a15b4de83ee99b92578680a5612bafdb4.
Aug 13 07:17:19.095242 containerd[1720]: time="2025-08-13T07:17:19.095124266Z" level=info msg="StartContainer for \"7e6ba4a6a513a15e9828c52baf77a221da8ae4d04c9f44fedc41e70de4f445d6\" returns successfully"
Aug 13 07:17:19.116043 containerd[1720]: time="2025-08-13T07:17:19.115997339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jzfpq,Uid:e4108d2a-b834-4b48-a2df-2319df7c53ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f31a47a726cbb08d38beb3cfec262f5a15b4de83ee99b92578680a5612bafdb4\""
Aug 13 07:17:19.121647 containerd[1720]: time="2025-08-13T07:17:19.119935866Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 07:17:19.311441 kubelet[3215]: I0813 07:17:19.311374 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-df7zh" podStartSLOduration=1.311348834 podStartE2EDuration="1.311348834s" podCreationTimestamp="2025-08-13 07:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:17:19.311127627 +0000 UTC m=+5.163981479" watchObservedRunningTime="2025-08-13 07:17:19.311348834 +0000 UTC m=+5.164202686"
Aug 13 07:17:19.693698 systemd[1]: run-containerd-runc-k8s.io-fb3a34bad3d1b9e9850ad9e86eb03435a767e509462673caf5fa24b6b56a9cfd-runc.wSFBml.mount: Deactivated successfully.
Aug 13 07:17:20.542474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794835116.mount: Deactivated successfully.
Aug 13 07:17:21.387746 containerd[1720]: time="2025-08-13T07:17:21.387701240Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:21.390321 containerd[1720]: time="2025-08-13T07:17:21.390128418Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 07:17:21.393408 containerd[1720]: time="2025-08-13T07:17:21.393335522Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:21.398213 containerd[1720]: time="2025-08-13T07:17:21.398156577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:21.399284 containerd[1720]: time="2025-08-13T07:17:21.398846999Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.278876132s"
Aug 13 07:17:21.399284 containerd[1720]: time="2025-08-13T07:17:21.398884401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 07:17:21.402418 containerd[1720]: time="2025-08-13T07:17:21.402372613Z" level=info msg="CreateContainer within sandbox \"f31a47a726cbb08d38beb3cfec262f5a15b4de83ee99b92578680a5612bafdb4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 07:17:21.447078 containerd[1720]: time="2025-08-13T07:17:21.447029652Z" level=info msg="CreateContainer within sandbox \"f31a47a726cbb08d38beb3cfec262f5a15b4de83ee99b92578680a5612bafdb4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55\""
Aug 13 07:17:21.447779 containerd[1720]: time="2025-08-13T07:17:21.447735375Z" level=info msg="StartContainer for \"bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55\""
Aug 13 07:17:21.480208 systemd[1]: run-containerd-runc-k8s.io-bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55-runc.fIL9No.mount: Deactivated successfully.
Aug 13 07:17:21.491425 systemd[1]: Started cri-containerd-bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55.scope - libcontainer container bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55.
Aug 13 07:17:21.520625 containerd[1720]: time="2025-08-13T07:17:21.520467418Z" level=info msg="StartContainer for \"bedfa0446dc036bbe6a4ad226e634ab2ffa1d8aab37e37a6f48530396ff43b55\" returns successfully"
Aug 13 07:17:26.235268 kubelet[3215]: I0813 07:17:26.234686 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jzfpq" podStartSLOduration=5.953761304 podStartE2EDuration="8.234664602s" podCreationTimestamp="2025-08-13 07:17:18 +0000 UTC" firstStartedPulling="2025-08-13 07:17:19.119104439 +0000 UTC m=+4.971958291" lastFinishedPulling="2025-08-13 07:17:21.400007637 +0000 UTC m=+7.252861589" observedRunningTime="2025-08-13 07:17:22.325170448 +0000 UTC m=+8.178024300" watchObservedRunningTime="2025-08-13 07:17:26.234664602 +0000 UTC m=+12.087518454"
Aug 13 07:17:27.884213 sudo[2252]: pam_unix(sudo:session): session closed for user root
Aug 13 07:17:27.988346 sshd[2249]: pam_unix(sshd:session): session closed for user core
Aug 13 07:17:27.997806 systemd[1]: sshd@6-10.200.4.46:22-10.200.16.10:38150.service: Deactivated successfully.
Aug 13 07:17:28.002192 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 07:17:28.002614 systemd[1]: session-9.scope: Consumed 5.217s CPU time, 156.6M memory peak, 0B memory swap peak.
Aug 13 07:17:28.005458 systemd-logind[1689]: Session 9 logged out. Waiting for processes to exit.
Aug 13 07:17:28.006652 systemd-logind[1689]: Removed session 9.
Aug 13 07:17:32.828116 systemd[1]: Created slice kubepods-besteffort-podee550af1_d2fd_41b6_b65a_6e6087d7837b.slice - libcontainer container kubepods-besteffort-podee550af1_d2fd_41b6_b65a_6e6087d7837b.slice.
Aug 13 07:17:32.860618 kubelet[3215]: I0813 07:17:32.860565 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee550af1-d2fd-41b6-b65a-6e6087d7837b-tigera-ca-bundle\") pod \"calico-typha-bdf8d8f46-hs9km\" (UID: \"ee550af1-d2fd-41b6-b65a-6e6087d7837b\") " pod="calico-system/calico-typha-bdf8d8f46-hs9km"
Aug 13 07:17:32.860618 kubelet[3215]: I0813 07:17:32.860614 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fc8\" (UniqueName: \"kubernetes.io/projected/ee550af1-d2fd-41b6-b65a-6e6087d7837b-kube-api-access-l5fc8\") pod \"calico-typha-bdf8d8f46-hs9km\" (UID: \"ee550af1-d2fd-41b6-b65a-6e6087d7837b\") " pod="calico-system/calico-typha-bdf8d8f46-hs9km"
Aug 13 07:17:32.861131 kubelet[3215]: I0813 07:17:32.860642 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee550af1-d2fd-41b6-b65a-6e6087d7837b-typha-certs\") pod \"calico-typha-bdf8d8f46-hs9km\" (UID: \"ee550af1-d2fd-41b6-b65a-6e6087d7837b\") " pod="calico-system/calico-typha-bdf8d8f46-hs9km"
Aug 13 07:17:33.134774 containerd[1720]: time="2025-08-13T07:17:33.134362545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bdf8d8f46-hs9km,Uid:ee550af1-d2fd-41b6-b65a-6e6087d7837b,Namespace:calico-system,Attempt:0,}"
Aug 13 07:17:33.197596 systemd[1]: Created slice kubepods-besteffort-pod3af9f24e_3c4b_408d_97ff_a4e9c2e959f7.slice - libcontainer container kubepods-besteffort-pod3af9f24e_3c4b_408d_97ff_a4e9c2e959f7.slice.
Aug 13 07:17:33.205622 containerd[1720]: time="2025-08-13T07:17:33.205058364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:33.206190 containerd[1720]: time="2025-08-13T07:17:33.205963897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:33.206190 containerd[1720]: time="2025-08-13T07:17:33.205991598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:33.206190 containerd[1720]: time="2025-08-13T07:17:33.206093201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:33.242464 systemd[1]: Started cri-containerd-1660ccd2a6a704c71d9151cc2ea9e435d46427fbe0dc2cb2facb606845c7d525.scope - libcontainer container 1660ccd2a6a704c71d9151cc2ea9e435d46427fbe0dc2cb2facb606845c7d525.
Aug 13 07:17:33.264171 kubelet[3215]: I0813 07:17:33.263794 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-node-certs\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264171 kubelet[3215]: I0813 07:17:33.263840 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-tigera-ca-bundle\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264171 kubelet[3215]: I0813 07:17:33.263867 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-flexvol-driver-host\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264171 kubelet[3215]: I0813 07:17:33.263891 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-xtables-lock\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264171 kubelet[3215]: I0813 07:17:33.263913 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsw5\" (UniqueName: \"kubernetes.io/projected/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-kube-api-access-zbsw5\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264509 kubelet[3215]: I0813 07:17:33.263936 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-policysync\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264509 kubelet[3215]: I0813 07:17:33.263959 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-cni-net-dir\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264509 kubelet[3215]: I0813 07:17:33.263982 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-lib-modules\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264509 kubelet[3215]: I0813 07:17:33.264004 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-var-run-calico\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264509 kubelet[3215]: I0813 07:17:33.264025 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-cni-bin-dir\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264710 kubelet[3215]: I0813 07:17:33.264047 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-cni-log-dir\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.264710 kubelet[3215]: I0813 07:17:33.264072 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3af9f24e-3c4b-408d-97ff-a4e9c2e959f7-var-lib-calico\") pod \"calico-node-llv6f\" (UID: \"3af9f24e-3c4b-408d-97ff-a4e9c2e959f7\") " pod="calico-system/calico-node-llv6f"
Aug 13 07:17:33.296195 containerd[1720]: time="2025-08-13T07:17:33.296152810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bdf8d8f46-hs9km,Uid:ee550af1-d2fd-41b6-b65a-6e6087d7837b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1660ccd2a6a704c71d9151cc2ea9e435d46427fbe0dc2cb2facb606845c7d525\""
Aug 13 07:17:33.297903 containerd[1720]: time="2025-08-13T07:17:33.297821670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 07:17:33.368749 kubelet[3215]: E0813 07:17:33.368709 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.368749 kubelet[3215]: W0813 07:17:33.368738 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.368961 kubelet[3215]: E0813 07:17:33.368795 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.369221 kubelet[3215]: E0813 07:17:33.369190 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.369221 kubelet[3215]: W0813 07:17:33.369212 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.369476 kubelet[3215]: E0813 07:17:33.369249 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.369566 kubelet[3215]: E0813 07:17:33.369546 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.369611 kubelet[3215]: W0813 07:17:33.369578 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.369611 kubelet[3215]: E0813 07:17:33.369603 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.370072 kubelet[3215]: E0813 07:17:33.370030 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.370072 kubelet[3215]: W0813 07:17:33.370056 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.370316 kubelet[3215]: E0813 07:17:33.370296 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.370554 kubelet[3215]: E0813 07:17:33.370535 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.370554 kubelet[3215]: W0813 07:17:33.370553 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.370666 kubelet[3215]: E0813 07:17:33.370603 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.371063 kubelet[3215]: E0813 07:17:33.371038 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.371063 kubelet[3215]: W0813 07:17:33.371061 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.371235 kubelet[3215]: E0813 07:17:33.371191 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.371529 kubelet[3215]: E0813 07:17:33.371504 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.371529 kubelet[3215]: W0813 07:17:33.371518 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.371813 kubelet[3215]: E0813 07:17:33.371789 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.372181 kubelet[3215]: E0813 07:17:33.372160 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.372181 kubelet[3215]: W0813 07:17:33.372180 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.372332 kubelet[3215]: E0813 07:17:33.372268 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.378209 kubelet[3215]: E0813 07:17:33.378185 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.378209 kubelet[3215]: W0813 07:17:33.378207 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.378620 kubelet[3215]: E0813 07:17:33.378418 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.379528 kubelet[3215]: E0813 07:17:33.379492 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.379528 kubelet[3215]: W0813 07:17:33.379527 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.381465 kubelet[3215]: E0813 07:17:33.381443 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.381465 kubelet[3215]: W0813 07:17:33.381464 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.382905 kubelet[3215]: E0813 07:17:33.382828 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.382905 kubelet[3215]: W0813 07:17:33.382844 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.382905 kubelet[3215]: E0813 07:17:33.382859 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.383709 kubelet[3215]: E0813 07:17:33.383687 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.383709 kubelet[3215]: W0813 07:17:33.383709 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.384080 kubelet[3215]: E0813 07:17:33.383725 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.384451 kubelet[3215]: E0813 07:17:33.384419 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.384451 kubelet[3215]: W0813 07:17:33.384438 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.384583 kubelet[3215]: E0813 07:17:33.384453 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390197 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.393338 kubelet[3215]: W0813 07:17:33.390219 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390234 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390312 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390620 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.393338 kubelet[3215]: W0813 07:17:33.390632 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390645 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.390669 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393338 kubelet[3215]: E0813 07:17:33.391041 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.393338 kubelet[3215]: W0813 07:17:33.391053 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.393777 kubelet[3215]: E0813 07:17:33.391126 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.393777 kubelet[3215]: E0813 07:17:33.391605 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.393777 kubelet[3215]: W0813 07:17:33.391617 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.393777 kubelet[3215]: E0813 07:17:33.391631 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 07:17:33.394325 kubelet[3215]: E0813 07:17:33.394303 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.394325 kubelet[3215]: W0813 07:17:33.394324 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.394486 kubelet[3215]: E0813 07:17:33.394340 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.444014 kubelet[3215]: E0813 07:17:33.443947 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:33.462353 kubelet[3215]: E0813 07:17:33.462308 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.462353 kubelet[3215]: W0813 07:17:33.462339 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.462563 kubelet[3215]: E0813 07:17:33.462365 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.464064 kubelet[3215]: E0813 07:17:33.464045 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.464064 kubelet[3215]: W0813 07:17:33.464065 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.464475 kubelet[3215]: E0813 07:17:33.464081 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.464617 kubelet[3215]: E0813 07:17:33.464597 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.464617 kubelet[3215]: W0813 07:17:33.464616 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.464730 kubelet[3215]: E0813 07:17:33.464632 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.465161 kubelet[3215]: E0813 07:17:33.465134 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.465161 kubelet[3215]: W0813 07:17:33.465151 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.465299 kubelet[3215]: E0813 07:17:33.465166 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.465636 kubelet[3215]: E0813 07:17:33.465575 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.465636 kubelet[3215]: W0813 07:17:33.465612 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.465636 kubelet[3215]: E0813 07:17:33.465628 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.466155 kubelet[3215]: E0813 07:17:33.466136 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.466155 kubelet[3215]: W0813 07:17:33.466156 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.466309 kubelet[3215]: E0813 07:17:33.466171 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.466672 kubelet[3215]: E0813 07:17:33.466643 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.466672 kubelet[3215]: W0813 07:17:33.466663 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.466894 kubelet[3215]: E0813 07:17:33.466749 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.467078 kubelet[3215]: E0813 07:17:33.467020 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.467078 kubelet[3215]: W0813 07:17:33.467034 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.467078 kubelet[3215]: E0813 07:17:33.467069 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.467863 kubelet[3215]: E0813 07:17:33.467462 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.467863 kubelet[3215]: W0813 07:17:33.467491 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.467863 kubelet[3215]: E0813 07:17:33.467506 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.467863 kubelet[3215]: E0813 07:17:33.467748 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.467863 kubelet[3215]: W0813 07:17:33.467760 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.467863 kubelet[3215]: E0813 07:17:33.467784 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.468152 kubelet[3215]: E0813 07:17:33.468058 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.468152 kubelet[3215]: W0813 07:17:33.468070 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.468152 kubelet[3215]: E0813 07:17:33.468106 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.468607 kubelet[3215]: E0813 07:17:33.468448 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.468607 kubelet[3215]: W0813 07:17:33.468463 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.468607 kubelet[3215]: E0813 07:17:33.468477 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.469440 kubelet[3215]: E0813 07:17:33.468858 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.469440 kubelet[3215]: W0813 07:17:33.468870 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.469440 kubelet[3215]: E0813 07:17:33.468884 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.469440 kubelet[3215]: E0813 07:17:33.469165 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.469440 kubelet[3215]: W0813 07:17:33.469176 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.469440 kubelet[3215]: E0813 07:17:33.469207 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.469698 kubelet[3215]: E0813 07:17:33.469478 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.469698 kubelet[3215]: W0813 07:17:33.469509 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.469698 kubelet[3215]: E0813 07:17:33.469523 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.469834 kubelet[3215]: E0813 07:17:33.469770 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.469834 kubelet[3215]: W0813 07:17:33.469780 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.469834 kubelet[3215]: E0813 07:17:33.469792 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.470163 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.471144 kubelet[3215]: W0813 07:17:33.470176 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.470188 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.470697 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.471144 kubelet[3215]: W0813 07:17:33.470710 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.470724 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.471097 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.471144 kubelet[3215]: W0813 07:17:33.471120 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.471144 kubelet[3215]: E0813 07:17:33.471135 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.471558 kubelet[3215]: E0813 07:17:33.471430 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.471558 kubelet[3215]: W0813 07:17:33.471442 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.471558 kubelet[3215]: E0813 07:17:33.471454 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.471828 kubelet[3215]: E0813 07:17:33.471804 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.471975 kubelet[3215]: W0813 07:17:33.471851 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.471975 kubelet[3215]: E0813 07:17:33.471867 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.471975 kubelet[3215]: I0813 07:17:33.471908 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6dea07cd-503b-45c7-8ebe-51b022e30cd4-socket-dir\") pod \"csi-node-driver-kngq7\" (UID: \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\") " pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:33.472347 kubelet[3215]: E0813 07:17:33.472172 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.472347 kubelet[3215]: W0813 07:17:33.472189 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.472347 kubelet[3215]: E0813 07:17:33.472203 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.472347 kubelet[3215]: I0813 07:17:33.472226 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6dea07cd-503b-45c7-8ebe-51b022e30cd4-registration-dir\") pod \"csi-node-driver-kngq7\" (UID: \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\") " pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:33.472553 kubelet[3215]: E0813 07:17:33.472459 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.472553 kubelet[3215]: W0813 07:17:33.472472 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.472553 kubelet[3215]: E0813 07:17:33.472486 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.472553 kubelet[3215]: I0813 07:17:33.472509 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dea07cd-503b-45c7-8ebe-51b022e30cd4-kubelet-dir\") pod \"csi-node-driver-kngq7\" (UID: \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\") " pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:33.473778 kubelet[3215]: E0813 07:17:33.473745 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.473857 kubelet[3215]: W0813 07:17:33.473782 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.473857 kubelet[3215]: E0813 07:17:33.473821 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.473857 kubelet[3215]: I0813 07:17:33.473846 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6dea07cd-503b-45c7-8ebe-51b022e30cd4-varrun\") pod \"csi-node-driver-kngq7\" (UID: \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\") " pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:33.474117 kubelet[3215]: E0813 07:17:33.474099 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.474182 kubelet[3215]: W0813 07:17:33.474132 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.474182 kubelet[3215]: E0813 07:17:33.474149 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.474427 kubelet[3215]: E0813 07:17:33.474411 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.474427 kubelet[3215]: W0813 07:17:33.474427 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.474523 kubelet[3215]: E0813 07:17:33.474441 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.474709 kubelet[3215]: E0813 07:17:33.474688 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.474774 kubelet[3215]: W0813 07:17:33.474722 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.474774 kubelet[3215]: E0813 07:17:33.474738 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.475295 kubelet[3215]: E0813 07:17:33.474922 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.475295 kubelet[3215]: W0813 07:17:33.474936 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.475295 kubelet[3215]: E0813 07:17:33.474949 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.475465 kubelet[3215]: E0813 07:17:33.475408 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.475465 kubelet[3215]: W0813 07:17:33.475420 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.475465 kubelet[3215]: E0813 07:17:33.475450 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.475828 kubelet[3215]: E0813 07:17:33.475671 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.475828 kubelet[3215]: W0813 07:17:33.475686 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.475828 kubelet[3215]: E0813 07:17:33.475724 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.475999 kubelet[3215]: E0813 07:17:33.475983 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.475999 kubelet[3215]: W0813 07:17:33.475994 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.476084 kubelet[3215]: E0813 07:17:33.476016 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.477230 kubelet[3215]: E0813 07:17:33.476252 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.477230 kubelet[3215]: W0813 07:17:33.476576 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.477230 kubelet[3215]: E0813 07:17:33.476591 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:33.477230 kubelet[3215]: I0813 07:17:33.476653 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbdl\" (UniqueName: \"kubernetes.io/projected/6dea07cd-503b-45c7-8ebe-51b022e30cd4-kube-api-access-pdbdl\") pod \"csi-node-driver-kngq7\" (UID: \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\") " pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:33.477230 kubelet[3215]: E0813 07:17:33.476969 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.477230 kubelet[3215]: W0813 07:17:33.476981 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.477230 kubelet[3215]: E0813 07:17:33.476995 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:33.477650 kubelet[3215]: E0813 07:17:33.477376 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:33.477650 kubelet[3215]: W0813 07:17:33.477388 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:33.477650 kubelet[3215]: E0813 07:17:33.477401 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 07:17:33.477771 kubelet[3215]: E0813 07:17:33.477672 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.477771 kubelet[3215]: W0813 07:17:33.477684 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.477771 kubelet[3215]: E0813 07:17:33.477714 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.506315 containerd[1720]: time="2025-08-13T07:17:33.505954814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llv6f,Uid:3af9f24e-3c4b-408d-97ff-a4e9c2e959f7,Namespace:calico-system,Attempt:0,}"
Aug 13 07:17:33.567669 containerd[1720]: time="2025-08-13T07:17:33.567534953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 07:17:33.567669 containerd[1720]: time="2025-08-13T07:17:33.567607056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 07:17:33.567669 containerd[1720]: time="2025-08-13T07:17:33.567629856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:33.569567 containerd[1720]: time="2025-08-13T07:17:33.567736960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:17:33.578861 kubelet[3215]: E0813 07:17:33.578546 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.579269 kubelet[3215]: W0813 07:17:33.579116 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.579737 kubelet[3215]: E0813 07:17:33.579702 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.580990 kubelet[3215]: E0813 07:17:33.580884 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.580990 kubelet[3215]: W0813 07:17:33.580901 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.580990 kubelet[3215]: E0813 07:17:33.580943 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.582724 kubelet[3215]: E0813 07:17:33.582607 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.582724 kubelet[3215]: W0813 07:17:33.582624 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.582724 kubelet[3215]: E0813 07:17:33.582668 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.584207 kubelet[3215]: E0813 07:17:33.583926 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.584207 kubelet[3215]: W0813 07:17:33.583943 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.584207 kubelet[3215]: E0813 07:17:33.584069 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.588035 kubelet[3215]: E0813 07:17:33.586940 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.588035 kubelet[3215]: W0813 07:17:33.586956 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.588035 kubelet[3215]: E0813 07:17:33.587103 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.588557 kubelet[3215]: E0813 07:17:33.588485 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.589184 kubelet[3215]: W0813 07:17:33.588882 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.589696 kubelet[3215]: E0813 07:17:33.589539 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.590877 kubelet[3215]: E0813 07:17:33.590304 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.590877 kubelet[3215]: W0813 07:17:33.590503 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.591439 kubelet[3215]: E0813 07:17:33.591287 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.591982 kubelet[3215]: E0813 07:17:33.591827 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.591982 kubelet[3215]: W0813 07:17:33.591847 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.592873 kubelet[3215]: E0813 07:17:33.592386 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.592873 kubelet[3215]: E0813 07:17:33.592840 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.592873 kubelet[3215]: W0813 07:17:33.592855 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.594136 kubelet[3215]: E0813 07:17:33.593670 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.594136 kubelet[3215]: E0813 07:17:33.594102 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.594136 kubelet[3215]: W0813 07:17:33.594116 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.594741 kubelet[3215]: E0813 07:17:33.594717 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.595881 kubelet[3215]: E0813 07:17:33.595859 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.595881 kubelet[3215]: W0813 07:17:33.595874 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.596158 kubelet[3215]: E0813 07:17:33.596136 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.596158 kubelet[3215]: W0813 07:17:33.596151 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.596437 kubelet[3215]: E0813 07:17:33.596416 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.596437 kubelet[3215]: W0813 07:17:33.596430 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.597086 kubelet[3215]: E0813 07:17:33.597065 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.597086 kubelet[3215]: W0813 07:17:33.597085 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.597571 kubelet[3215]: E0813 07:17:33.597100 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.598412 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.599775 kubelet[3215]: W0813 07:17:33.598428 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.598443 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.599087 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.599775 kubelet[3215]: W0813 07:17:33.599100 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.599114 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.599145 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.599526 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.599775 kubelet[3215]: W0813 07:17:33.599539 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.599775 kubelet[3215]: E0813 07:17:33.599552 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.598478 systemd[1]: Started cri-containerd-3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90.scope - libcontainer container 3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90.
Aug 13 07:17:33.601441 kubelet[3215]: E0813 07:17:33.599919 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.601441 kubelet[3215]: W0813 07:17:33.599931 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.601441 kubelet[3215]: E0813 07:17:33.599945 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.601441 kubelet[3215]: E0813 07:17:33.599990 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.601441 kubelet[3215]: E0813 07:17:33.600527 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.601441 kubelet[3215]: W0813 07:17:33.600541 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.601441 kubelet[3215]: E0813 07:17:33.600557 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.602298 kubelet[3215]: E0813 07:17:33.601941 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.602298 kubelet[3215]: W0813 07:17:33.601957 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.602298 kubelet[3215]: E0813 07:17:33.602021 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.602564 kubelet[3215]: E0813 07:17:33.602547 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.602564 kubelet[3215]: W0813 07:17:33.602561 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.602685 kubelet[3215]: E0813 07:17:33.602575 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.603216 kubelet[3215]: E0813 07:17:33.602812 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.603216 kubelet[3215]: W0813 07:17:33.602833 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.603216 kubelet[3215]: E0813 07:17:33.602847 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.603216 kubelet[3215]: E0813 07:17:33.603090 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.603547 kubelet[3215]: E0813 07:17:33.603533 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.603634 kubelet[3215]: W0813 07:17:33.603622 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.603780 kubelet[3215]: E0813 07:17:33.603706 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.604053 kubelet[3215]: E0813 07:17:33.604038 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.604230 kubelet[3215]: W0813 07:17:33.604112 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.604230 kubelet[3215]: E0813 07:17:33.604131 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.604785 kubelet[3215]: E0813 07:17:33.604703 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.604785 kubelet[3215]: W0813 07:17:33.604719 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.604785 kubelet[3215]: E0813 07:17:33.604734 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.653036 kubelet[3215]: E0813 07:17:33.652444 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:33.653036 kubelet[3215]: W0813 07:17:33.652467 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:33.653036 kubelet[3215]: E0813 07:17:33.652500 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:33.658580 containerd[1720]: time="2025-08-13T07:17:33.658434617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-llv6f,Uid:3af9f24e-3c4b-408d-97ff-a4e9c2e959f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\""
Aug 13 07:17:34.612623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573175752.mount: Deactivated successfully.
Aug 13 07:17:35.253589 kubelet[3215]: E0813 07:17:35.253533 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4"
Aug 13 07:17:35.851723 containerd[1720]: time="2025-08-13T07:17:35.850816076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:35.853569 containerd[1720]: time="2025-08-13T07:17:35.853526862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 07:17:35.856606 containerd[1720]: time="2025-08-13T07:17:35.856545757Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:35.863684 containerd[1720]: time="2025-08-13T07:17:35.863637580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:17:35.870578 containerd[1720]: time="2025-08-13T07:17:35.870530397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.572658326s"
Aug 13 07:17:35.870705 containerd[1720]: time="2025-08-13T07:17:35.870586399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 07:17:35.878102 containerd[1720]: time="2025-08-13T07:17:35.877637121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 07:17:35.895835 containerd[1720]: time="2025-08-13T07:17:35.894519653Z" level=info msg="CreateContainer within sandbox \"1660ccd2a6a704c71d9151cc2ea9e435d46427fbe0dc2cb2facb606845c7d525\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 07:17:35.947935 containerd[1720]: time="2025-08-13T07:17:35.947884134Z" level=info msg="CreateContainer within sandbox \"1660ccd2a6a704c71d9151cc2ea9e435d46427fbe0dc2cb2facb606845c7d525\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"067a636b198b007e57101fcaae7a9b11f3b59bad051c274ff83031e7ced3d94b\""
Aug 13 07:17:35.948950 containerd[1720]: time="2025-08-13T07:17:35.948804763Z" level=info msg="StartContainer for \"067a636b198b007e57101fcaae7a9b11f3b59bad051c274ff83031e7ced3d94b\""
Aug 13 07:17:35.985528 systemd[1]: Started cri-containerd-067a636b198b007e57101fcaae7a9b11f3b59bad051c274ff83031e7ced3d94b.scope - libcontainer container 067a636b198b007e57101fcaae7a9b11f3b59bad051c274ff83031e7ced3d94b.
Aug 13 07:17:36.050358 containerd[1720]: time="2025-08-13T07:17:36.050292060Z" level=info msg="StartContainer for \"067a636b198b007e57101fcaae7a9b11f3b59bad051c274ff83031e7ced3d94b\" returns successfully"
Aug 13 07:17:36.394499 kubelet[3215]: E0813 07:17:36.394376 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.394499 kubelet[3215]: W0813 07:17:36.394408 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.394499 kubelet[3215]: E0813 07:17:36.394433 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.395575 kubelet[3215]: E0813 07:17:36.395302 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.395575 kubelet[3215]: W0813 07:17:36.395322 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.395575 kubelet[3215]: E0813 07:17:36.395342 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.395575 kubelet[3215]: E0813 07:17:36.395646 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.395575 kubelet[3215]: W0813 07:17:36.395660 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.395575 kubelet[3215]: E0813 07:17:36.395675 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.397518 kubelet[3215]: E0813 07:17:36.397338 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.397518 kubelet[3215]: W0813 07:17:36.397363 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.397518 kubelet[3215]: E0813 07:17:36.397379 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398137 kubelet[3215]: E0813 07:17:36.397628 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398137 kubelet[3215]: W0813 07:17:36.397656 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398137 kubelet[3215]: E0813 07:17:36.397671 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398137 kubelet[3215]: E0813 07:17:36.397894 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398137 kubelet[3215]: W0813 07:17:36.397906 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398137 kubelet[3215]: E0813 07:17:36.397918 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398137 kubelet[3215]: E0813 07:17:36.398139 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398598 kubelet[3215]: W0813 07:17:36.398151 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398598 kubelet[3215]: E0813 07:17:36.398164 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398598 kubelet[3215]: E0813 07:17:36.398402 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398598 kubelet[3215]: W0813 07:17:36.398414 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398598 kubelet[3215]: E0813 07:17:36.398427 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398816 kubelet[3215]: E0813 07:17:36.398654 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398816 kubelet[3215]: W0813 07:17:36.398665 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398816 kubelet[3215]: E0813 07:17:36.398677 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.398953 kubelet[3215]: E0813 07:17:36.398870 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.398953 kubelet[3215]: W0813 07:17:36.398880 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.398953 kubelet[3215]: E0813 07:17:36.398891 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399092 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.399831 kubelet[3215]: W0813 07:17:36.399108 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399120 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399356 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.399831 kubelet[3215]: W0813 07:17:36.399367 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399381 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399599 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.399831 kubelet[3215]: W0813 07:17:36.399610 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.399831 kubelet[3215]: E0813 07:17:36.399622 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.402558 kubelet[3215]: E0813 07:17:36.402451 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.402558 kubelet[3215]: W0813 07:17:36.402467 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.402558 kubelet[3215]: E0813 07:17:36.402481 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.402763 kubelet[3215]: E0813 07:17:36.402696 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.402763 kubelet[3215]: W0813 07:17:36.402708 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.402763 kubelet[3215]: E0813 07:17:36.402722 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.418244 kubelet[3215]: E0813 07:17:36.418219 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.418244 kubelet[3215]: W0813 07:17:36.418242 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.418424 kubelet[3215]: E0813 07:17:36.418274 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.418855 kubelet[3215]: E0813 07:17:36.418632 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.418855 kubelet[3215]: W0813 07:17:36.418649 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.418855 kubelet[3215]: E0813 07:17:36.418664 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.419022 kubelet[3215]: E0813 07:17:36.418947 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.419022 kubelet[3215]: W0813 07:17:36.418960 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.419022 kubelet[3215]: E0813 07:17:36.418990 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.422691 kubelet[3215]: E0813 07:17:36.419405 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.422691 kubelet[3215]: W0813 07:17:36.422620 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.422691 kubelet[3215]: E0813 07:17:36.422665 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.423018 kubelet[3215]: E0813 07:17:36.422990 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.423018 kubelet[3215]: W0813 07:17:36.423016 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.423121 kubelet[3215]: E0813 07:17:36.423090 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.423508 kubelet[3215]: E0813 07:17:36.423343 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.423508 kubelet[3215]: W0813 07:17:36.423358 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.423508 kubelet[3215]: E0813 07:17:36.423445 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 07:17:36.423677 kubelet[3215]: E0813 07:17:36.423620 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 07:17:36.423677 kubelet[3215]: W0813 07:17:36.423631 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 07:17:36.423763 kubelet[3215]: E0813 07:17:36.423715 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424219 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.425289 kubelet[3215]: W0813 07:17:36.424245 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424290 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424605 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.425289 kubelet[3215]: W0813 07:17:36.424618 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424656 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424934 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.425289 kubelet[3215]: W0813 07:17:36.424945 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.424967 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:36.425289 kubelet[3215]: E0813 07:17:36.425205 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.425739 kubelet[3215]: W0813 07:17:36.425217 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.425739 kubelet[3215]: E0813 07:17:36.425230 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.425739 kubelet[3215]: E0813 07:17:36.425555 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.425739 kubelet[3215]: W0813 07:17:36.425567 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.425739 kubelet[3215]: E0813 07:17:36.425599 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:36.426401 kubelet[3215]: E0813 07:17:36.426380 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.426401 kubelet[3215]: W0813 07:17:36.426398 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.426570 kubelet[3215]: E0813 07:17:36.426412 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.426791 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.428280 kubelet[3215]: W0813 07:17:36.426810 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.426851 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.427221 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.428280 kubelet[3215]: W0813 07:17:36.427235 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.427313 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.427675 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.428280 kubelet[3215]: W0813 07:17:36.427687 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.427708 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:36.428280 kubelet[3215]: E0813 07:17:36.428007 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.428686 kubelet[3215]: W0813 07:17:36.428019 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.428686 kubelet[3215]: E0813 07:17:36.428032 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:17:36.428686 kubelet[3215]: E0813 07:17:36.428498 3215 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:17:36.428686 kubelet[3215]: W0813 07:17:36.428510 3215 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:17:36.428686 kubelet[3215]: E0813 07:17:36.428543 3215 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:17:37.176230 containerd[1720]: time="2025-08-13T07:17:37.176085022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:37.181291 containerd[1720]: time="2025-08-13T07:17:37.180120749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:17:37.183860 containerd[1720]: time="2025-08-13T07:17:37.183818165Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:37.191026 containerd[1720]: time="2025-08-13T07:17:37.190987591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:37.192443 containerd[1720]: time="2025-08-13T07:17:37.192406036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.314707013s" Aug 13 07:17:37.192530 containerd[1720]: time="2025-08-13T07:17:37.192446537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:17:37.195907 containerd[1720]: time="2025-08-13T07:17:37.195871145Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:17:37.239842 containerd[1720]: time="2025-08-13T07:17:37.239787028Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93\"" Aug 13 07:17:37.240642 containerd[1720]: time="2025-08-13T07:17:37.240603754Z" level=info msg="StartContainer for \"f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93\"" Aug 13 07:17:37.256717 kubelet[3215]: E0813 07:17:37.254561 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:37.322447 systemd[1]: Started cri-containerd-f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93.scope - libcontainer container f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93. 
Aug 13 07:17:37.360188 kubelet[3215]: I0813 07:17:37.360148 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:37.398738 containerd[1720]: time="2025-08-13T07:17:37.398647532Z" level=info msg="StartContainer for \"f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93\" returns successfully" Aug 13 07:17:37.408481 systemd[1]: cri-containerd-f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93.scope: Deactivated successfully. Aug 13 07:17:37.882812 systemd[1]: run-containerd-runc-k8s.io-f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93-runc.sjL58O.mount: Deactivated successfully. Aug 13 07:17:37.882949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93-rootfs.mount: Deactivated successfully. Aug 13 07:17:38.388438 kubelet[3215]: I0813 07:17:38.388315 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bdf8d8f46-hs9km" podStartSLOduration=3.81092513 podStartE2EDuration="6.388290706s" podCreationTimestamp="2025-08-13 07:17:32 +0000 UTC" firstStartedPulling="2025-08-13 07:17:33.297537159 +0000 UTC m=+19.150391011" lastFinishedPulling="2025-08-13 07:17:35.874902735 +0000 UTC m=+21.727756587" observedRunningTime="2025-08-13 07:17:36.385970333 +0000 UTC m=+22.238824285" watchObservedRunningTime="2025-08-13 07:17:38.388290706 +0000 UTC m=+24.241144558" Aug 13 07:17:38.941756 containerd[1720]: time="2025-08-13T07:17:38.941600635Z" level=info msg="shim disconnected" id=f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93 namespace=k8s.io Aug 13 07:17:38.941756 containerd[1720]: time="2025-08-13T07:17:38.941748340Z" level=warning msg="cleaning up after shim disconnected" id=f9783c7cc02d203f6d5f2a6837a111fb28c70d4c2e691bd91592e335418abf93 namespace=k8s.io Aug 13 07:17:38.941756 containerd[1720]: time="2025-08-13T07:17:38.941761540Z" level=info msg="cleaning up 
dead shim" namespace=k8s.io Aug 13 07:17:39.253057 kubelet[3215]: E0813 07:17:39.252926 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:39.371736 containerd[1720]: time="2025-08-13T07:17:39.371617533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:17:41.252919 kubelet[3215]: E0813 07:17:41.252869 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:43.253177 kubelet[3215]: E0813 07:17:43.253061 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:44.318514 containerd[1720]: time="2025-08-13T07:17:44.318456587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:44.321882 containerd[1720]: time="2025-08-13T07:17:44.321707174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:17:44.325079 containerd[1720]: time="2025-08-13T07:17:44.324524050Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
07:17:44.329046 containerd[1720]: time="2025-08-13T07:17:44.329010970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:44.330215 containerd[1720]: time="2025-08-13T07:17:44.329835792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.958163157s" Aug 13 07:17:44.330215 containerd[1720]: time="2025-08-13T07:17:44.329869093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:17:44.333476 containerd[1720]: time="2025-08-13T07:17:44.333441489Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:17:44.381061 containerd[1720]: time="2025-08-13T07:17:44.381012165Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6\"" Aug 13 07:17:44.383279 containerd[1720]: time="2025-08-13T07:17:44.382528006Z" level=info msg="StartContainer for \"523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6\"" Aug 13 07:17:44.444988 systemd[1]: Started cri-containerd-523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6.scope - libcontainer container 523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6. 
Aug 13 07:17:44.479414 containerd[1720]: time="2025-08-13T07:17:44.479364104Z" level=info msg="StartContainer for \"523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6\" returns successfully" Aug 13 07:17:45.253168 kubelet[3215]: E0813 07:17:45.253056 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:46.114066 containerd[1720]: time="2025-08-13T07:17:46.114000774Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:17:46.118610 systemd[1]: cri-containerd-523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6.scope: Deactivated successfully. Aug 13 07:17:46.141904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6-rootfs.mount: Deactivated successfully. 
Aug 13 07:17:46.196075 kubelet[3215]: I0813 07:17:46.195670 3215 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 07:17:46.736373 kubelet[3215]: I0813 07:17:46.292533 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whl7n\" (UniqueName: \"kubernetes.io/projected/b6d30009-e3c1-496f-8ea4-de2a0c63018b-kube-api-access-whl7n\") pod \"calico-apiserver-5cdd967ff-7cwjt\" (UID: \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\") " pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" Aug 13 07:17:46.736373 kubelet[3215]: I0813 07:17:46.292576 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gz8\" (UniqueName: \"kubernetes.io/projected/3bcaff83-98f1-4f1e-9ec2-0de878c93569-kube-api-access-v5gz8\") pod \"coredns-668d6bf9bc-dfzv4\" (UID: \"3bcaff83-98f1-4f1e-9ec2-0de878c93569\") " pod="kube-system/coredns-668d6bf9bc-dfzv4" Aug 13 07:17:46.736373 kubelet[3215]: I0813 07:17:46.292601 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2jd\" (UniqueName: \"kubernetes.io/projected/5ab325b6-c552-42c8-a448-2c9835fe41c3-kube-api-access-zm2jd\") pod \"calico-kube-controllers-65d98d4c87-tmh2g\" (UID: \"5ab325b6-c552-42c8-a448-2c9835fe41c3\") " pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" Aug 13 07:17:46.736373 kubelet[3215]: I0813 07:17:46.292625 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6d30009-e3c1-496f-8ea4-de2a0c63018b-calico-apiserver-certs\") pod \"calico-apiserver-5cdd967ff-7cwjt\" (UID: \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\") " pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" Aug 13 07:17:46.736373 kubelet[3215]: I0813 07:17:46.292653 3215 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab325b6-c552-42c8-a448-2c9835fe41c3-tigera-ca-bundle\") pod \"calico-kube-controllers-65d98d4c87-tmh2g\" (UID: \"5ab325b6-c552-42c8-a448-2c9835fe41c3\") " pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" Aug 13 07:17:46.254194 systemd[1]: Created slice kubepods-burstable-pod3bcaff83_98f1_4f1e_9ec2_0de878c93569.slice - libcontainer container kubepods-burstable-pod3bcaff83_98f1_4f1e_9ec2_0de878c93569.slice. Aug 13 07:17:46.737106 kubelet[3215]: I0813 07:17:46.292675 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5fa15d-dd4b-47f2-8c06-e769c8807083-config-volume\") pod \"coredns-668d6bf9bc-672s8\" (UID: \"fd5fa15d-dd4b-47f2-8c06-e769c8807083\") " pod="kube-system/coredns-668d6bf9bc-672s8" Aug 13 07:17:46.737106 kubelet[3215]: I0813 07:17:46.292698 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcaff83-98f1-4f1e-9ec2-0de878c93569-config-volume\") pod \"coredns-668d6bf9bc-dfzv4\" (UID: \"3bcaff83-98f1-4f1e-9ec2-0de878c93569\") " pod="kube-system/coredns-668d6bf9bc-dfzv4" Aug 13 07:17:46.737106 kubelet[3215]: I0813 07:17:46.292719 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9kf\" (UniqueName: \"kubernetes.io/projected/fd5fa15d-dd4b-47f2-8c06-e769c8807083-kube-api-access-2x9kf\") pod \"coredns-668d6bf9bc-672s8\" (UID: \"fd5fa15d-dd4b-47f2-8c06-e769c8807083\") " pod="kube-system/coredns-668d6bf9bc-672s8" Aug 13 07:17:46.283097 systemd[1]: Created slice kubepods-besteffort-pod5ab325b6_c552_42c8_a448_2c9835fe41c3.slice - libcontainer container kubepods-besteffort-pod5ab325b6_c552_42c8_a448_2c9835fe41c3.slice. 
Aug 13 07:17:46.295628 systemd[1]: Created slice kubepods-burstable-podfd5fa15d_dd4b_47f2_8c06_e769c8807083.slice - libcontainer container kubepods-burstable-podfd5fa15d_dd4b_47f2_8c06_e769c8807083.slice. Aug 13 07:17:46.305316 systemd[1]: Created slice kubepods-besteffort-podb6d30009_e3c1_496f_8ea4_de2a0c63018b.slice - libcontainer container kubepods-besteffort-podb6d30009_e3c1_496f_8ea4_de2a0c63018b.slice. Aug 13 07:17:46.796783 kubelet[3215]: I0813 07:17:46.796688 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d895fcd6-d479-4f4e-87f8-3b6aee927688-calico-apiserver-certs\") pod \"calico-apiserver-5cdd967ff-rqqwz\" (UID: \"d895fcd6-d479-4f4e-87f8-3b6aee927688\") " pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" Aug 13 07:17:46.796929 kubelet[3215]: I0813 07:17:46.796793 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0d2537cf-0c17-4fe1-83ab-ece63f331986-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8g2b7\" (UID: \"0d2537cf-0c17-4fe1-83ab-ece63f331986\") " pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:46.796929 kubelet[3215]: I0813 07:17:46.796832 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jht\" (UniqueName: \"kubernetes.io/projected/31ff16bd-65fa-4475-be19-58aa527037ea-kube-api-access-r5jht\") pod \"calico-apiserver-966bb757f-8qwrf\" (UID: \"31ff16bd-65fa-4475-be19-58aa527037ea\") " pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" Aug 13 07:17:46.796929 kubelet[3215]: I0813 07:17:46.796882 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2537cf-0c17-4fe1-83ab-ece63f331986-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8g2b7\" 
(UID: \"0d2537cf-0c17-4fe1-83ab-ece63f331986\") " pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:46.796929 kubelet[3215]: I0813 07:17:46.796913 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6sj\" (UniqueName: \"kubernetes.io/projected/d895fcd6-d479-4f4e-87f8-3b6aee927688-kube-api-access-pr6sj\") pod \"calico-apiserver-5cdd967ff-rqqwz\" (UID: \"d895fcd6-d479-4f4e-87f8-3b6aee927688\") " pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" Aug 13 07:17:46.797115 kubelet[3215]: I0813 07:17:46.796942 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/069f376f-0179-4789-bdb3-d836086aef24-whisker-ca-bundle\") pod \"whisker-7dc9959b6b-zhz6g\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " pod="calico-system/whisker-7dc9959b6b-zhz6g" Aug 13 07:17:46.797115 kubelet[3215]: I0813 07:17:46.797050 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/069f376f-0179-4789-bdb3-d836086aef24-whisker-backend-key-pair\") pod \"whisker-7dc9959b6b-zhz6g\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " pod="calico-system/whisker-7dc9959b6b-zhz6g" Aug 13 07:17:46.797115 kubelet[3215]: I0813 07:17:46.797086 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tq4\" (UniqueName: \"kubernetes.io/projected/0d2537cf-0c17-4fe1-83ab-ece63f331986-kube-api-access-48tq4\") pod \"goldmane-768f4c5c69-8g2b7\" (UID: \"0d2537cf-0c17-4fe1-83ab-ece63f331986\") " pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:46.798685 kubelet[3215]: I0813 07:17:46.797119 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9gs\" (UniqueName: 
\"kubernetes.io/projected/069f376f-0179-4789-bdb3-d836086aef24-kube-api-access-sj9gs\") pod \"whisker-7dc9959b6b-zhz6g\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " pod="calico-system/whisker-7dc9959b6b-zhz6g" Aug 13 07:17:46.798685 kubelet[3215]: I0813 07:17:46.797199 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31ff16bd-65fa-4475-be19-58aa527037ea-calico-apiserver-certs\") pod \"calico-apiserver-966bb757f-8qwrf\" (UID: \"31ff16bd-65fa-4475-be19-58aa527037ea\") " pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" Aug 13 07:17:46.798685 kubelet[3215]: I0813 07:17:46.797248 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d2537cf-0c17-4fe1-83ab-ece63f331986-config\") pod \"goldmane-768f4c5c69-8g2b7\" (UID: \"0d2537cf-0c17-4fe1-83ab-ece63f331986\") " pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:46.810349 systemd[1]: Created slice kubepods-besteffort-podd895fcd6_d479_4f4e_87f8_3b6aee927688.slice - libcontainer container kubepods-besteffort-podd895fcd6_d479_4f4e_87f8_3b6aee927688.slice. Aug 13 07:17:46.820369 systemd[1]: Created slice kubepods-besteffort-pod31ff16bd_65fa_4475_be19_58aa527037ea.slice - libcontainer container kubepods-besteffort-pod31ff16bd_65fa_4475_be19_58aa527037ea.slice. Aug 13 07:17:46.830296 systemd[1]: Created slice kubepods-besteffort-pod069f376f_0179_4789_bdb3_d836086aef24.slice - libcontainer container kubepods-besteffort-pod069f376f_0179_4789_bdb3_d836086aef24.slice. Aug 13 07:17:46.837963 systemd[1]: Created slice kubepods-besteffort-pod0d2537cf_0c17_4fe1_83ab_ece63f331986.slice - libcontainer container kubepods-besteffort-pod0d2537cf_0c17_4fe1_83ab_ece63f331986.slice. 
Aug 13 07:17:47.040956 containerd[1720]: time="2025-08-13T07:17:47.040910257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfzv4,Uid:3bcaff83-98f1-4f1e-9ec2-0de878c93569,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:47.043366 containerd[1720]: time="2025-08-13T07:17:47.043333041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d98d4c87-tmh2g,Uid:5ab325b6-c552-42c8-a448-2c9835fe41c3,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:47.043628 containerd[1720]: time="2025-08-13T07:17:47.043589450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-672s8,Uid:fd5fa15d-dd4b-47f2-8c06-e769c8807083,Namespace:kube-system,Attempt:0,}" Aug 13 07:17:47.061017 containerd[1720]: time="2025-08-13T07:17:47.060981154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-7cwjt,Uid:b6d30009-e3c1-496f-8ea4-de2a0c63018b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:47.117077 containerd[1720]: time="2025-08-13T07:17:47.117020799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-rqqwz,Uid:d895fcd6-d479-4f4e-87f8-3b6aee927688,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:47.128368 containerd[1720]: time="2025-08-13T07:17:47.128321692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-8qwrf,Uid:31ff16bd-65fa-4475-be19-58aa527037ea,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:17:47.134388 containerd[1720]: time="2025-08-13T07:17:47.134354301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc9959b6b-zhz6g,Uid:069f376f-0179-4789-bdb3-d836086aef24,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:47.142289 containerd[1720]: time="2025-08-13T07:17:47.140885428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8g2b7,Uid:0d2537cf-0c17-4fe1-83ab-ece63f331986,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:47.259732 
systemd[1]: Created slice kubepods-besteffort-pod6dea07cd_503b_45c7_8ebe_51b022e30cd4.slice - libcontainer container kubepods-besteffort-pod6dea07cd_503b_45c7_8ebe_51b022e30cd4.slice. Aug 13 07:17:47.262217 containerd[1720]: time="2025-08-13T07:17:47.262182140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kngq7,Uid:6dea07cd-503b-45c7-8ebe-51b022e30cd4,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:47.394787 containerd[1720]: time="2025-08-13T07:17:47.394631138Z" level=info msg="shim disconnected" id=523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6 namespace=k8s.io Aug 13 07:17:47.394787 containerd[1720]: time="2025-08-13T07:17:47.394684640Z" level=warning msg="cleaning up after shim disconnected" id=523cd03d50232b5ee8c2ca2d2de99e3bdf3ae563cd2081bca73f828e6353fad6 namespace=k8s.io Aug 13 07:17:47.394787 containerd[1720]: time="2025-08-13T07:17:47.394695640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:17:47.882308 containerd[1720]: time="2025-08-13T07:17:47.882241868Z" level=error msg="Failed to destroy network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.882671 containerd[1720]: time="2025-08-13T07:17:47.882633182Z" level=error msg="encountered an error cleaning up failed sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.882783 containerd[1720]: time="2025-08-13T07:17:47.882701284Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-7cwjt,Uid:b6d30009-e3c1-496f-8ea4-de2a0c63018b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.887766 kubelet[3215]: E0813 07:17:47.887717 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.888310 kubelet[3215]: E0813 07:17:47.888280 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" Aug 13 07:17:47.888452 kubelet[3215]: E0813 07:17:47.888430 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" Aug 13 07:17:47.888823 kubelet[3215]: E0813 07:17:47.888747 3215 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cdd967ff-7cwjt_calico-apiserver(b6d30009-e3c1-496f-8ea4-de2a0c63018b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cdd967ff-7cwjt_calico-apiserver(b6d30009-e3c1-496f-8ea4-de2a0c63018b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" podUID="b6d30009-e3c1-496f-8ea4-de2a0c63018b" Aug 13 07:17:47.979499 containerd[1720]: time="2025-08-13T07:17:47.979424142Z" level=error msg="Failed to destroy network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.981273 containerd[1720]: time="2025-08-13T07:17:47.980956896Z" level=error msg="encountered an error cleaning up failed sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.984353 containerd[1720]: time="2025-08-13T07:17:47.981536016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d98d4c87-tmh2g,Uid:5ab325b6-c552-42c8-a448-2c9835fe41c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.984892 kubelet[3215]: E0813 07:17:47.984844 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:47.987501 kubelet[3215]: E0813 07:17:47.986349 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" Aug 13 07:17:47.987501 kubelet[3215]: E0813 07:17:47.986395 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" Aug 13 07:17:47.987501 kubelet[3215]: E0813 07:17:47.986461 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65d98d4c87-tmh2g_calico-system(5ab325b6-c552-42c8-a448-2c9835fe41c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-65d98d4c87-tmh2g_calico-system(5ab325b6-c552-42c8-a448-2c9835fe41c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" podUID="5ab325b6-c552-42c8-a448-2c9835fe41c3" Aug 13 07:17:48.002267 containerd[1720]: time="2025-08-13T07:17:48.002217034Z" level=error msg="Failed to destroy network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.003878 containerd[1720]: time="2025-08-13T07:17:48.003660584Z" level=error msg="encountered an error cleaning up failed sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.004075 containerd[1720]: time="2025-08-13T07:17:48.004034897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfzv4,Uid:3bcaff83-98f1-4f1e-9ec2-0de878c93569,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.004707 kubelet[3215]: E0813 07:17:48.004674 3215 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.005425 kubelet[3215]: E0813 07:17:48.004854 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfzv4" Aug 13 07:17:48.005425 kubelet[3215]: E0813 07:17:48.004894 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfzv4" Aug 13 07:17:48.005875 kubelet[3215]: E0813 07:17:48.004959 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfzv4_kube-system(3bcaff83-98f1-4f1e-9ec2-0de878c93569)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfzv4_kube-system(3bcaff83-98f1-4f1e-9ec2-0de878c93569)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfzv4" podUID="3bcaff83-98f1-4f1e-9ec2-0de878c93569" Aug 13 07:17:48.009944 containerd[1720]: time="2025-08-13T07:17:48.009908101Z" level=error msg="Failed to destroy network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.010306 containerd[1720]: time="2025-08-13T07:17:48.010270914Z" level=error msg="encountered an error cleaning up failed sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.010383 containerd[1720]: time="2025-08-13T07:17:48.010329616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kngq7,Uid:6dea07cd-503b-45c7-8ebe-51b022e30cd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.010848 kubelet[3215]: E0813 07:17:48.010576 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.010848 
kubelet[3215]: E0813 07:17:48.010623 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:48.010848 kubelet[3215]: E0813 07:17:48.010648 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kngq7" Aug 13 07:17:48.011017 kubelet[3215]: E0813 07:17:48.010694 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kngq7_calico-system(6dea07cd-503b-45c7-8ebe-51b022e30cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kngq7_calico-system(6dea07cd-503b-45c7-8ebe-51b022e30cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:48.014286 containerd[1720]: time="2025-08-13T07:17:48.013729234Z" level=error msg="Failed to destroy network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.014366 containerd[1720]: time="2025-08-13T07:17:48.014318354Z" level=error msg="encountered an error cleaning up failed sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.014572 containerd[1720]: time="2025-08-13T07:17:48.014469959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-rqqwz,Uid:d895fcd6-d479-4f4e-87f8-3b6aee927688,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.014748 kubelet[3215]: E0813 07:17:48.014717 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.014817 kubelet[3215]: E0813 07:17:48.014768 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" Aug 13 07:17:48.014817 kubelet[3215]: E0813 07:17:48.014794 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" Aug 13 07:17:48.015066 kubelet[3215]: E0813 07:17:48.014837 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cdd967ff-rqqwz_calico-apiserver(d895fcd6-d479-4f4e-87f8-3b6aee927688)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cdd967ff-rqqwz_calico-apiserver(d895fcd6-d479-4f4e-87f8-3b6aee927688)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" podUID="d895fcd6-d479-4f4e-87f8-3b6aee927688" Aug 13 07:17:48.019883 containerd[1720]: time="2025-08-13T07:17:48.019518135Z" level=error msg="Failed to destroy network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.019883 containerd[1720]: time="2025-08-13T07:17:48.019869747Z" level=error msg="encountered an error cleaning up failed sandbox 
\"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.020143 containerd[1720]: time="2025-08-13T07:17:48.020026552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8g2b7,Uid:0d2537cf-0c17-4fe1-83ab-ece63f331986,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.020340 kubelet[3215]: E0813 07:17:48.020310 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.020420 kubelet[3215]: E0813 07:17:48.020360 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:48.020420 kubelet[3215]: E0813 07:17:48.020384 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8g2b7" Aug 13 07:17:48.020506 kubelet[3215]: E0813 07:17:48.020425 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8g2b7_calico-system(0d2537cf-0c17-4fe1-83ab-ece63f331986)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8g2b7_calico-system(0d2537cf-0c17-4fe1-83ab-ece63f331986)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8g2b7" podUID="0d2537cf-0c17-4fe1-83ab-ece63f331986" Aug 13 07:17:48.024164 containerd[1720]: time="2025-08-13T07:17:48.023982390Z" level=error msg="Failed to destroy network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.024597 containerd[1720]: time="2025-08-13T07:17:48.024472907Z" level=error msg="encountered an error cleaning up failed sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.024597 containerd[1720]: 
time="2025-08-13T07:17:48.024522708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-672s8,Uid:fd5fa15d-dd4b-47f2-8c06-e769c8807083,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.024882 kubelet[3215]: E0813 07:17:48.024700 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.024882 kubelet[3215]: E0813 07:17:48.024759 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-672s8" Aug 13 07:17:48.024882 kubelet[3215]: E0813 07:17:48.024786 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-672s8" Aug 13 07:17:48.025060 kubelet[3215]: E0813 07:17:48.024826 3215 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-672s8_kube-system(fd5fa15d-dd4b-47f2-8c06-e769c8807083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-672s8_kube-system(fd5fa15d-dd4b-47f2-8c06-e769c8807083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-672s8" podUID="fd5fa15d-dd4b-47f2-8c06-e769c8807083" Aug 13 07:17:48.025709 containerd[1720]: time="2025-08-13T07:17:48.025443840Z" level=error msg="Failed to destroy network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.026406 containerd[1720]: time="2025-08-13T07:17:48.026322171Z" level=error msg="encountered an error cleaning up failed sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.026561 containerd[1720]: time="2025-08-13T07:17:48.026508577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc9959b6b-zhz6g,Uid:069f376f-0179-4789-bdb3-d836086aef24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.027008 kubelet[3215]: E0813 07:17:48.026854 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.027008 kubelet[3215]: E0813 07:17:48.026906 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dc9959b6b-zhz6g" Aug 13 07:17:48.027008 kubelet[3215]: E0813 07:17:48.026928 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dc9959b6b-zhz6g" Aug 13 07:17:48.027308 kubelet[3215]: E0813 07:17:48.026969 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dc9959b6b-zhz6g_calico-system(069f376f-0179-4789-bdb3-d836086aef24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dc9959b6b-zhz6g_calico-system(069f376f-0179-4789-bdb3-d836086aef24)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dc9959b6b-zhz6g" podUID="069f376f-0179-4789-bdb3-d836086aef24" Aug 13 07:17:48.028738 containerd[1720]: time="2025-08-13T07:17:48.028708554Z" level=error msg="Failed to destroy network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.028989 containerd[1720]: time="2025-08-13T07:17:48.028961862Z" level=error msg="encountered an error cleaning up failed sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.029082 containerd[1720]: time="2025-08-13T07:17:48.029010264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-8qwrf,Uid:31ff16bd-65fa-4475-be19-58aa527037ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.029325 kubelet[3215]: E0813 07:17:48.029178 3215 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.029325 kubelet[3215]: E0813 07:17:48.029229 3215 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" Aug 13 07:17:48.029325 kubelet[3215]: E0813 07:17:48.029251 3215 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" Aug 13 07:17:48.029456 kubelet[3215]: E0813 07:17:48.029315 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-966bb757f-8qwrf_calico-apiserver(31ff16bd-65fa-4475-be19-58aa527037ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-966bb757f-8qwrf_calico-apiserver(31ff16bd-65fa-4475-be19-58aa527037ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" podUID="31ff16bd-65fa-4475-be19-58aa527037ea" Aug 13 07:17:48.090161 kubelet[3215]: I0813 07:17:48.089783 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:48.145676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a-shm.mount: Deactivated successfully. Aug 13 07:17:48.145788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa-shm.mount: Deactivated successfully. Aug 13 07:17:48.145865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1-shm.mount: Deactivated successfully. Aug 13 07:17:48.145943 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0-shm.mount: Deactivated successfully. Aug 13 07:17:48.393090 kubelet[3215]: I0813 07:17:48.393046 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:17:48.393977 containerd[1720]: time="2025-08-13T07:17:48.393934034Z" level=info msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" Aug 13 07:17:48.394433 containerd[1720]: time="2025-08-13T07:17:48.394187843Z" level=info msg="Ensure that sandbox 503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a in task-service has been cleanup successfully" Aug 13 07:17:48.399350 kubelet[3215]: I0813 07:17:48.397680 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:17:48.399496 containerd[1720]: time="2025-08-13T07:17:48.398501293Z" level=info msg="StopPodSandbox for 
\"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" Aug 13 07:17:48.399496 containerd[1720]: time="2025-08-13T07:17:48.398746302Z" level=info msg="Ensure that sandbox f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3 in task-service has been cleanup successfully" Aug 13 07:17:48.402794 kubelet[3215]: I0813 07:17:48.402765 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:48.404290 containerd[1720]: time="2025-08-13T07:17:48.404250693Z" level=info msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" Aug 13 07:17:48.404879 containerd[1720]: time="2025-08-13T07:17:48.404849513Z" level=info msg="Ensure that sandbox 9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a in task-service has been cleanup successfully" Aug 13 07:17:48.409116 containerd[1720]: time="2025-08-13T07:17:48.408770850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:17:48.410268 kubelet[3215]: I0813 07:17:48.409805 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:17:48.413109 containerd[1720]: time="2025-08-13T07:17:48.412653984Z" level=info msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" Aug 13 07:17:48.413109 containerd[1720]: time="2025-08-13T07:17:48.412848691Z" level=info msg="Ensure that sandbox a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6 in task-service has been cleanup successfully" Aug 13 07:17:48.424131 kubelet[3215]: I0813 07:17:48.424105 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:17:48.425322 containerd[1720]: time="2025-08-13T07:17:48.425246922Z" level=info 
msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" Aug 13 07:17:48.426293 containerd[1720]: time="2025-08-13T07:17:48.425599334Z" level=info msg="Ensure that sandbox f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c in task-service has been cleanup successfully" Aug 13 07:17:48.432803 kubelet[3215]: I0813 07:17:48.431858 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:48.433346 containerd[1720]: time="2025-08-13T07:17:48.433310202Z" level=info msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" Aug 13 07:17:48.433572 containerd[1720]: time="2025-08-13T07:17:48.433543710Z" level=info msg="Ensure that sandbox 20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68 in task-service has been cleanup successfully" Aug 13 07:17:48.441281 kubelet[3215]: I0813 07:17:48.440852 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:17:48.442159 containerd[1720]: time="2025-08-13T07:17:48.442127508Z" level=info msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" Aug 13 07:17:48.444523 containerd[1720]: time="2025-08-13T07:17:48.444484290Z" level=info msg="Ensure that sandbox d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0 in task-service has been cleanup successfully" Aug 13 07:17:48.463099 kubelet[3215]: I0813 07:17:48.460252 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:17:48.464685 containerd[1720]: time="2025-08-13T07:17:48.464648790Z" level=info msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" Aug 13 07:17:48.465013 
containerd[1720]: time="2025-08-13T07:17:48.464991402Z" level=info msg="Ensure that sandbox adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa in task-service has been cleanup successfully" Aug 13 07:17:48.475177 kubelet[3215]: I0813 07:17:48.475148 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:17:48.476852 containerd[1720]: time="2025-08-13T07:17:48.476821412Z" level=info msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" Aug 13 07:17:48.477859 containerd[1720]: time="2025-08-13T07:17:48.477831347Z" level=info msg="Ensure that sandbox 23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1 in task-service has been cleanup successfully" Aug 13 07:17:48.580520 containerd[1720]: time="2025-08-13T07:17:48.580460811Z" level=error msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" failed" error="failed to destroy network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.580973 kubelet[3215]: E0813 07:17:48.580724 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:17:48.580973 kubelet[3215]: E0813 07:17:48.580800 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0"} Aug 13 07:17:48.580973 kubelet[3215]: E0813 07:17:48.580882 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.580973 kubelet[3215]: E0813 07:17:48.580919 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" podUID="b6d30009-e3c1-496f-8ea4-de2a0c63018b" Aug 13 07:17:48.581721 containerd[1720]: time="2025-08-13T07:17:48.581675753Z" level=error msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" failed" error="failed to destroy network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.582057 kubelet[3215]: E0813 07:17:48.581895 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:48.582057 kubelet[3215]: E0813 07:17:48.581943 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a"} Aug 13 07:17:48.582057 kubelet[3215]: E0813 07:17:48.581992 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd5fa15d-dd4b-47f2-8c06-e769c8807083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.582057 kubelet[3215]: E0813 07:17:48.582029 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd5fa15d-dd4b-47f2-8c06-e769c8807083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-672s8" podUID="fd5fa15d-dd4b-47f2-8c06-e769c8807083" Aug 13 07:17:48.590494 containerd[1720]: time="2025-08-13T07:17:48.590454358Z" level=error msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" failed" error="failed to destroy network 
for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.590827 kubelet[3215]: E0813 07:17:48.590667 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:17:48.590827 kubelet[3215]: E0813 07:17:48.590713 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1"} Aug 13 07:17:48.590827 kubelet[3215]: E0813 07:17:48.590753 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bcaff83-98f1-4f1e-9ec2-0de878c93569\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.590827 kubelet[3215]: E0813 07:17:48.590780 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bcaff83-98f1-4f1e-9ec2-0de878c93569\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfzv4" podUID="3bcaff83-98f1-4f1e-9ec2-0de878c93569" Aug 13 07:17:48.599861 containerd[1720]: time="2025-08-13T07:17:48.599747780Z" level=error msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" failed" error="failed to destroy network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.599994 kubelet[3215]: E0813 07:17:48.599951 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:48.600079 kubelet[3215]: E0813 07:17:48.600010 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68"} Aug 13 07:17:48.600079 kubelet[3215]: E0813 07:17:48.600048 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"069f376f-0179-4789-bdb3-d836086aef24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Aug 13 07:17:48.600209 kubelet[3215]: E0813 07:17:48.600095 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"069f376f-0179-4789-bdb3-d836086aef24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dc9959b6b-zhz6g" podUID="069f376f-0179-4789-bdb3-d836086aef24" Aug 13 07:17:48.610699 containerd[1720]: time="2025-08-13T07:17:48.610652859Z" level=error msg="StopPodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" failed" error="failed to destroy network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.611056 kubelet[3215]: E0813 07:17:48.610911 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:17:48.611056 kubelet[3215]: E0813 07:17:48.610962 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3"} Aug 13 07:17:48.611056 kubelet[3215]: E0813 07:17:48.611013 3215 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31ff16bd-65fa-4475-be19-58aa527037ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.611311 kubelet[3215]: E0813 07:17:48.611041 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31ff16bd-65fa-4475-be19-58aa527037ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" podUID="31ff16bd-65fa-4475-be19-58aa527037ea" Aug 13 07:17:48.613176 containerd[1720]: time="2025-08-13T07:17:48.612317917Z" level=error msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" failed" error="failed to destroy network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.613315 kubelet[3215]: E0813 07:17:48.612507 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:17:48.613315 kubelet[3215]: E0813 07:17:48.612548 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c"} Aug 13 07:17:48.613315 kubelet[3215]: E0813 07:17:48.612584 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.613315 kubelet[3215]: E0813 07:17:48.612613 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dea07cd-503b-45c7-8ebe-51b022e30cd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kngq7" podUID="6dea07cd-503b-45c7-8ebe-51b022e30cd4" Aug 13 07:17:48.616266 containerd[1720]: time="2025-08-13T07:17:48.616168151Z" level=error msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" failed" error="failed to destroy network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.616665 kubelet[3215]: E0813 07:17:48.616529 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:17:48.616665 kubelet[3215]: E0813 07:17:48.616585 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a"} Aug 13 07:17:48.616665 kubelet[3215]: E0813 07:17:48.616617 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0d2537cf-0c17-4fe1-83ab-ece63f331986\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.616665 kubelet[3215]: E0813 07:17:48.616641 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d2537cf-0c17-4fe1-83ab-ece63f331986\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-768f4c5c69-8g2b7" podUID="0d2537cf-0c17-4fe1-83ab-ece63f331986" Aug 13 07:17:48.625397 containerd[1720]: time="2025-08-13T07:17:48.625355269Z" level=error msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" failed" error="failed to destroy network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.625583 kubelet[3215]: E0813 07:17:48.625549 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:17:48.625694 kubelet[3215]: E0813 07:17:48.625597 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6"} Aug 13 07:17:48.625694 kubelet[3215]: E0813 07:17:48.625636 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d895fcd6-d479-4f4e-87f8-3b6aee927688\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.625694 kubelet[3215]: E0813 07:17:48.625664 3215 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"d895fcd6-d479-4f4e-87f8-3b6aee927688\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" podUID="d895fcd6-d479-4f4e-87f8-3b6aee927688" Aug 13 07:17:48.631852 containerd[1720]: time="2025-08-13T07:17:48.631800193Z" level=error msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" failed" error="failed to destroy network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:17:48.632045 kubelet[3215]: E0813 07:17:48.632012 3215 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:17:48.632145 kubelet[3215]: E0813 07:17:48.632059 3215 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa"} Aug 13 07:17:48.632145 kubelet[3215]: E0813 07:17:48.632097 3215 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ab325b6-c552-42c8-a448-2c9835fe41c3\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:17:48.632145 kubelet[3215]: E0813 07:17:48.632125 3215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ab325b6-c552-42c8-a448-2c9835fe41c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" podUID="5ab325b6-c552-42c8-a448-2c9835fe41c3" Aug 13 07:17:54.982543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303528400.mount: Deactivated successfully. 
Aug 13 07:17:55.020947 containerd[1720]: time="2025-08-13T07:17:55.020893485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:55.023353 containerd[1720]: time="2025-08-13T07:17:55.023213560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:17:55.026291 containerd[1720]: time="2025-08-13T07:17:55.025723742Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:55.031587 containerd[1720]: time="2025-08-13T07:17:55.031531432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:55.032241 containerd[1720]: time="2025-08-13T07:17:55.032082050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.622828184s" Aug 13 07:17:55.032241 containerd[1720]: time="2025-08-13T07:17:55.032123352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:17:55.049836 containerd[1720]: time="2025-08-13T07:17:55.049744628Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:17:55.096176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730493874.mount: 
Deactivated successfully. Aug 13 07:17:55.101407 containerd[1720]: time="2025-08-13T07:17:55.101361815Z" level=info msg="CreateContainer within sandbox \"3cdef8bffec8adc263771719bb8bd4f0c03e81bcf6e2966df95bceb6db93cd90\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120\"" Aug 13 07:17:55.103228 containerd[1720]: time="2025-08-13T07:17:55.103193775Z" level=info msg="StartContainer for \"6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120\"" Aug 13 07:17:55.134464 systemd[1]: Started cri-containerd-6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120.scope - libcontainer container 6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120. Aug 13 07:17:55.175059 containerd[1720]: time="2025-08-13T07:17:55.175011723Z" level=info msg="StartContainer for \"6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120\" returns successfully" Aug 13 07:17:55.514953 kubelet[3215]: I0813 07:17:55.514881 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-llv6f" podStartSLOduration=1.148426524 podStartE2EDuration="22.514858032s" podCreationTimestamp="2025-08-13 07:17:33 +0000 UTC" firstStartedPulling="2025-08-13 07:17:33.666611074 +0000 UTC m=+19.519465026" lastFinishedPulling="2025-08-13 07:17:55.033042582 +0000 UTC m=+40.885896534" observedRunningTime="2025-08-13 07:17:55.512619559 +0000 UTC m=+41.365473511" watchObservedRunningTime="2025-08-13 07:17:55.514858032 +0000 UTC m=+41.367711884" Aug 13 07:17:55.709720 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:17:55.709828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 07:17:55.840956 containerd[1720]: time="2025-08-13T07:17:55.840903290Z" level=info msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.934 [INFO][4477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.934 [INFO][4477] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" iface="eth0" netns="/var/run/netns/cni-59fc3cf3-928c-26ad-1b4c-5f3c5aafa51a" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.934 [INFO][4477] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" iface="eth0" netns="/var/run/netns/cni-59fc3cf3-928c-26ad-1b4c-5f3c5aafa51a" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.935 [INFO][4477] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" iface="eth0" netns="/var/run/netns/cni-59fc3cf3-928c-26ad-1b4c-5f3c5aafa51a" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.935 [INFO][4477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.935 [INFO][4477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.996 [INFO][4488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.996 [INFO][4488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:55.996 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:56.006 [WARNING][4488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:56.006 [INFO][4488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:56.008 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:56.016841 containerd[1720]: 2025-08-13 07:17:56.013 [INFO][4477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:17:56.019507 containerd[1720]: time="2025-08-13T07:17:56.019342623Z" level=info msg="TearDown network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" successfully" Aug 13 07:17:56.019507 containerd[1720]: time="2025-08-13T07:17:56.019379724Z" level=info msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" returns successfully" Aug 13 07:17:56.022241 systemd[1]: run-netns-cni\x2d59fc3cf3\x2d928c\x2d26ad\x2d1b4c\x2d5f3c5aafa51a.mount: Deactivated successfully. 
Aug 13 07:17:56.084282 kubelet[3215]: I0813 07:17:56.082758 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/069f376f-0179-4789-bdb3-d836086aef24-whisker-backend-key-pair\") pod \"069f376f-0179-4789-bdb3-d836086aef24\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " Aug 13 07:17:56.084282 kubelet[3215]: I0813 07:17:56.083576 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/069f376f-0179-4789-bdb3-d836086aef24-whisker-ca-bundle\") pod \"069f376f-0179-4789-bdb3-d836086aef24\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " Aug 13 07:17:56.084282 kubelet[3215]: I0813 07:17:56.083619 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj9gs\" (UniqueName: \"kubernetes.io/projected/069f376f-0179-4789-bdb3-d836086aef24-kube-api-access-sj9gs\") pod \"069f376f-0179-4789-bdb3-d836086aef24\" (UID: \"069f376f-0179-4789-bdb3-d836086aef24\") " Aug 13 07:17:56.091129 systemd[1]: var-lib-kubelet-pods-069f376f\x2d0179\x2d4789\x2dbdb3\x2dd836086aef24-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:17:56.095910 kubelet[3215]: I0813 07:17:56.092103 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069f376f-0179-4789-bdb3-d836086aef24-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "069f376f-0179-4789-bdb3-d836086aef24" (UID: "069f376f-0179-4789-bdb3-d836086aef24"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:17:56.095910 kubelet[3215]: I0813 07:17:56.092502 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/069f376f-0179-4789-bdb3-d836086aef24-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "069f376f-0179-4789-bdb3-d836086aef24" (UID: "069f376f-0179-4789-bdb3-d836086aef24"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 07:17:56.095910 kubelet[3215]: I0813 07:17:56.094435 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069f376f-0179-4789-bdb3-d836086aef24-kube-api-access-sj9gs" (OuterVolumeSpecName: "kube-api-access-sj9gs") pod "069f376f-0179-4789-bdb3-d836086aef24" (UID: "069f376f-0179-4789-bdb3-d836086aef24"). InnerVolumeSpecName "kube-api-access-sj9gs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:17:56.098058 systemd[1]: var-lib-kubelet-pods-069f376f\x2d0179\x2d4789\x2dbdb3\x2dd836086aef24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj9gs.mount: Deactivated successfully. 
Aug 13 07:17:56.184420 kubelet[3215]: I0813 07:17:56.184364 3215 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/069f376f-0179-4789-bdb3-d836086aef24-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:17:56.184572 kubelet[3215]: I0813 07:17:56.184432 3215 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/069f376f-0179-4789-bdb3-d836086aef24-whisker-ca-bundle\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:17:56.184572 kubelet[3215]: I0813 07:17:56.184447 3215 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj9gs\" (UniqueName: \"kubernetes.io/projected/069f376f-0179-4789-bdb3-d836086aef24-kube-api-access-sj9gs\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:17:56.263408 systemd[1]: Removed slice kubepods-besteffort-pod069f376f_0179_4789_bdb3_d836086aef24.slice - libcontainer container kubepods-besteffort-pod069f376f_0179_4789_bdb3_d836086aef24.slice. Aug 13 07:17:56.584986 systemd[1]: Created slice kubepods-besteffort-pod34c75508_8c63_49e1_a4b2_8009fd0fa230.slice - libcontainer container kubepods-besteffort-pod34c75508_8c63_49e1_a4b2_8009fd0fa230.slice. 
Aug 13 07:17:56.688481 kubelet[3215]: I0813 07:17:56.688411 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wtb\" (UniqueName: \"kubernetes.io/projected/34c75508-8c63-49e1-a4b2-8009fd0fa230-kube-api-access-l6wtb\") pod \"whisker-5d54fbbfdb-qsfwk\" (UID: \"34c75508-8c63-49e1-a4b2-8009fd0fa230\") " pod="calico-system/whisker-5d54fbbfdb-qsfwk" Aug 13 07:17:56.688481 kubelet[3215]: I0813 07:17:56.688471 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34c75508-8c63-49e1-a4b2-8009fd0fa230-whisker-ca-bundle\") pod \"whisker-5d54fbbfdb-qsfwk\" (UID: \"34c75508-8c63-49e1-a4b2-8009fd0fa230\") " pod="calico-system/whisker-5d54fbbfdb-qsfwk" Aug 13 07:17:56.688944 kubelet[3215]: I0813 07:17:56.688521 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34c75508-8c63-49e1-a4b2-8009fd0fa230-whisker-backend-key-pair\") pod \"whisker-5d54fbbfdb-qsfwk\" (UID: \"34c75508-8c63-49e1-a4b2-8009fd0fa230\") " pod="calico-system/whisker-5d54fbbfdb-qsfwk" Aug 13 07:17:56.889937 containerd[1720]: time="2025-08-13T07:17:56.889209612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d54fbbfdb-qsfwk,Uid:34c75508-8c63-49e1-a4b2-8009fd0fa230,Namespace:calico-system,Attempt:0,}" Aug 13 07:17:57.041820 systemd-networkd[1578]: cali77866d38ee9: Link UP Aug 13 07:17:57.043093 systemd-networkd[1578]: cali77866d38ee9: Gained carrier Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.941 [INFO][4511] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.951 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0 whisker-5d54fbbfdb- calico-system 34c75508-8c63-49e1-a4b2-8009fd0fa230 919 0 2025-08-13 07:17:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d54fbbfdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 whisker-5d54fbbfdb-qsfwk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali77866d38ee9 [] [] }} ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.952 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.979 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" HandleID="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.979 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" HandleID="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081.3.5-a-7346cb15f0", "pod":"whisker-5d54fbbfdb-qsfwk", "timestamp":"2025-08-13 07:17:56.979385506 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.979 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.979 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.979 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:56.995 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.003 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.008 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.010 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.012 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.012 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 
handle="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.013 [INFO][4523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.018 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.027 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.91.129/26] block=192.168.91.128/26 handle="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.027 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.129/26] handle="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.027 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:17:57.065962 containerd[1720]: 2025-08-13 07:17:57.027 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.129/26] IPv6=[] ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" HandleID="k8s-pod-network.ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.029 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0", GenerateName:"whisker-5d54fbbfdb-", Namespace:"calico-system", SelfLink:"", UID:"34c75508-8c63-49e1-a4b2-8009fd0fa230", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d54fbbfdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"whisker-5d54fbbfdb-qsfwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali77866d38ee9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.029 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.129/32] ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.029 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77866d38ee9 ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.042 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.044 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0", GenerateName:"whisker-5d54fbbfdb-", Namespace:"calico-system", SelfLink:"", 
UID:"34c75508-8c63-49e1-a4b2-8009fd0fa230", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d54fbbfdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae", Pod:"whisker-5d54fbbfdb-qsfwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77866d38ee9", MAC:"c2:60:2f:fe:7a:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:57.067422 containerd[1720]: 2025-08-13 07:17:57.061 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae" Namespace="calico-system" Pod="whisker-5d54fbbfdb-qsfwk" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--5d54fbbfdb--qsfwk-eth0" Aug 13 07:17:57.086853 containerd[1720]: time="2025-08-13T07:17:57.086371478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:57.086853 containerd[1720]: time="2025-08-13T07:17:57.086435880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:57.086853 containerd[1720]: time="2025-08-13T07:17:57.086452580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:57.086853 containerd[1720]: time="2025-08-13T07:17:57.086586185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:57.120862 systemd[1]: run-containerd-runc-k8s.io-ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae-runc.D1Cu0j.mount: Deactivated successfully. Aug 13 07:17:57.130421 systemd[1]: Started cri-containerd-ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae.scope - libcontainer container ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae. Aug 13 07:17:57.171710 containerd[1720]: time="2025-08-13T07:17:57.171668905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d54fbbfdb-qsfwk,Uid:34c75508-8c63-49e1-a4b2-8009fd0fa230,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae\"" Aug 13 07:17:57.174193 containerd[1720]: time="2025-08-13T07:17:57.174158790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:17:57.741356 kernel: bpftool[4700]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:17:58.067508 systemd-networkd[1578]: vxlan.calico: Link UP Aug 13 07:17:58.067519 systemd-networkd[1578]: vxlan.calico: Gained carrier Aug 13 07:17:58.253470 systemd-networkd[1578]: cali77866d38ee9: Gained IPv6LL Aug 13 07:17:58.258969 kubelet[3215]: I0813 07:17:58.258694 3215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069f376f-0179-4789-bdb3-d836086aef24" path="/var/lib/kubelet/pods/069f376f-0179-4789-bdb3-d836086aef24/volumes" Aug 13 07:17:58.360752 kubelet[3215]: I0813 07:17:58.358789 3215 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:17:58.437497 systemd[1]: run-containerd-runc-k8s.io-6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120-runc.UFsE7t.mount: Deactivated successfully. Aug 13 07:17:58.607059 systemd[1]: run-containerd-runc-k8s.io-6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120-runc.6YHS3K.mount: Deactivated successfully. Aug 13 07:17:58.624530 containerd[1720]: time="2025-08-13T07:17:58.624253164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:58.630271 containerd[1720]: time="2025-08-13T07:17:58.630200609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:17:58.634354 containerd[1720]: time="2025-08-13T07:17:58.634316810Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:58.641336 containerd[1720]: time="2025-08-13T07:17:58.641299581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:17:58.645278 containerd[1720]: time="2025-08-13T07:17:58.643150626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.468950635s" Aug 13 07:17:58.645278 containerd[1720]: time="2025-08-13T07:17:58.643190927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:17:58.649485 containerd[1720]: time="2025-08-13T07:17:58.649448880Z" level=info msg="CreateContainer within sandbox \"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:17:58.703498 containerd[1720]: time="2025-08-13T07:17:58.703446202Z" level=info msg="CreateContainer within sandbox \"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"66cb9a33ffcf3bb07ccebc54f54396930469ef1308b44112a34b756823408bcb\"" Aug 13 07:17:58.705140 containerd[1720]: time="2025-08-13T07:17:58.705105843Z" level=info msg="StartContainer for \"66cb9a33ffcf3bb07ccebc54f54396930469ef1308b44112a34b756823408bcb\"" Aug 13 07:17:58.741767 systemd[1]: Started cri-containerd-66cb9a33ffcf3bb07ccebc54f54396930469ef1308b44112a34b756823408bcb.scope - libcontainer container 66cb9a33ffcf3bb07ccebc54f54396930469ef1308b44112a34b756823408bcb. Aug 13 07:17:58.823633 containerd[1720]: time="2025-08-13T07:17:58.823594042Z" level=info msg="StartContainer for \"66cb9a33ffcf3bb07ccebc54f54396930469ef1308b44112a34b756823408bcb\" returns successfully" Aug 13 07:17:58.824965 containerd[1720]: time="2025-08-13T07:17:58.824882874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:17:59.255815 containerd[1720]: time="2025-08-13T07:17:59.255632616Z" level=info msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.312 [INFO][4867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.312 [INFO][4867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" iface="eth0" netns="/var/run/netns/cni-4cc5ce10-e5b0-b0a3-e160-b3cee9027ce7" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.313 [INFO][4867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" iface="eth0" netns="/var/run/netns/cni-4cc5ce10-e5b0-b0a3-e160-b3cee9027ce7" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.315 [INFO][4867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" iface="eth0" netns="/var/run/netns/cni-4cc5ce10-e5b0-b0a3-e160-b3cee9027ce7" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.316 [INFO][4867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.316 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.339 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.339 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.339 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.347 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.347 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.349 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:59.351837 containerd[1720]: 2025-08-13 07:17:59.350 [INFO][4867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:17:59.352483 containerd[1720]: time="2025-08-13T07:17:59.352036775Z" level=info msg="TearDown network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" successfully" Aug 13 07:17:59.352483 containerd[1720]: time="2025-08-13T07:17:59.352084276Z" level=info msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" returns successfully" Aug 13 07:17:59.353182 containerd[1720]: time="2025-08-13T07:17:59.353144602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-672s8,Uid:fd5fa15d-dd4b-47f2-8c06-e769c8807083,Namespace:kube-system,Attempt:1,}" Aug 13 07:17:59.417048 systemd[1]: run-netns-cni\x2d4cc5ce10\x2de5b0\x2db0a3\x2de160\x2db3cee9027ce7.mount: Deactivated successfully. 
Aug 13 07:17:59.516729 systemd-networkd[1578]: cali842127915f3: Link UP Aug 13 07:17:59.519924 systemd-networkd[1578]: cali842127915f3: Gained carrier Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.440 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0 coredns-668d6bf9bc- kube-system fd5fa15d-dd4b-47f2-8c06-e769c8807083 936 0 2025-08-13 07:17:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 coredns-668d6bf9bc-672s8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali842127915f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.440 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.472 [INFO][4895] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" HandleID="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.473 [INFO][4895] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" HandleID="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bf610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"coredns-668d6bf9bc-672s8", "timestamp":"2025-08-13 07:17:59.472902433 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.473 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.473 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.473 [INFO][4895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.481 [INFO][4895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.487 [INFO][4895] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.491 [INFO][4895] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.493 [INFO][4895] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.495 [INFO][4895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.495 [INFO][4895] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.497 [INFO][4895] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461 Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.505 [INFO][4895] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.511 [INFO][4895] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.130/26] block=192.168.91.128/26 handle="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.511 [INFO][4895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.130/26] handle="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.512 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:17:59.547645 containerd[1720]: 2025-08-13 07:17:59.512 [INFO][4895] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.130/26] IPv6=[] ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" HandleID="k8s-pod-network.090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.513 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fd5fa15d-dd4b-47f2-8c06-e769c8807083", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"coredns-668d6bf9bc-672s8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali842127915f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.514 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.130/32] ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.514 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali842127915f3 ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.519 [INFO][4883] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.519 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fd5fa15d-dd4b-47f2-8c06-e769c8807083", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461", Pod:"coredns-668d6bf9bc-672s8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali842127915f3", MAC:"66:4b:5a:d6:db:48", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:17:59.550055 containerd[1720]: 2025-08-13 07:17:59.544 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461" Namespace="kube-system" Pod="coredns-668d6bf9bc-672s8" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:17:59.586563 containerd[1720]: time="2025-08-13T07:17:59.586388211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:17:59.587747 containerd[1720]: time="2025-08-13T07:17:59.587688042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:17:59.588047 containerd[1720]: time="2025-08-13T07:17:59.587895547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:59.588227 containerd[1720]: time="2025-08-13T07:17:59.588028051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:17:59.628826 systemd[1]: Started cri-containerd-090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461.scope - libcontainer container 090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461. 
Aug 13 07:17:59.661116 systemd-networkd[1578]: vxlan.calico: Gained IPv6LL Aug 13 07:17:59.683783 containerd[1720]: time="2025-08-13T07:17:59.683711792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-672s8,Uid:fd5fa15d-dd4b-47f2-8c06-e769c8807083,Namespace:kube-system,Attempt:1,} returns sandbox id \"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461\"" Aug 13 07:17:59.688320 containerd[1720]: time="2025-08-13T07:17:59.688075499Z" level=info msg="CreateContainer within sandbox \"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:17:59.737087 containerd[1720]: time="2025-08-13T07:17:59.737019697Z" level=info msg="CreateContainer within sandbox \"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0acb0a63f25b60b603e71f076a9d468376e37f085772cff1f669f5769fac8fea\"" Aug 13 07:17:59.740200 containerd[1720]: time="2025-08-13T07:17:59.739916468Z" level=info msg="StartContainer for \"0acb0a63f25b60b603e71f076a9d468376e37f085772cff1f669f5769fac8fea\"" Aug 13 07:17:59.769450 systemd[1]: Started cri-containerd-0acb0a63f25b60b603e71f076a9d468376e37f085772cff1f669f5769fac8fea.scope - libcontainer container 0acb0a63f25b60b603e71f076a9d468376e37f085772cff1f669f5769fac8fea. 
Aug 13 07:17:59.801243 containerd[1720]: time="2025-08-13T07:17:59.801111766Z" level=info msg="StartContainer for \"0acb0a63f25b60b603e71f076a9d468376e37f085772cff1f669f5769fac8fea\" returns successfully" Aug 13 07:18:00.254438 containerd[1720]: time="2025-08-13T07:18:00.254252456Z" level=info msg="StopPodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" Aug 13 07:18:00.256361 containerd[1720]: time="2025-08-13T07:18:00.255384983Z" level=info msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.349 [INFO][5003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.349 [INFO][5003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" iface="eth0" netns="/var/run/netns/cni-a15d9906-e5dc-5c81-7dd0-ebd5d5e15471" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.350 [INFO][5003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" iface="eth0" netns="/var/run/netns/cni-a15d9906-e5dc-5c81-7dd0-ebd5d5e15471" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.350 [INFO][5003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" iface="eth0" netns="/var/run/netns/cni-a15d9906-e5dc-5c81-7dd0-ebd5d5e15471" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.350 [INFO][5003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.350 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.381 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.381 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.382 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.388 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.388 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.389 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:00.399995 containerd[1720]: 2025-08-13 07:18:00.393 [INFO][5003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:00.401123 containerd[1720]: time="2025-08-13T07:18:00.400067124Z" level=info msg="TearDown network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" successfully" Aug 13 07:18:00.401123 containerd[1720]: time="2025-08-13T07:18:00.400202227Z" level=info msg="StopPodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" returns successfully" Aug 13 07:18:00.405438 containerd[1720]: time="2025-08-13T07:18:00.403681713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-8qwrf,Uid:31ff16bd-65fa-4475-be19-58aa527037ea,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.342 [INFO][5010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.343 [INFO][5010] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" iface="eth0" netns="/var/run/netns/cni-239d0a0e-0870-b089-2d6b-e07aae76ae52" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.343 [INFO][5010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" iface="eth0" netns="/var/run/netns/cni-239d0a0e-0870-b089-2d6b-e07aae76ae52" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.345 [INFO][5010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" iface="eth0" netns="/var/run/netns/cni-239d0a0e-0870-b089-2d6b-e07aae76ae52" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.345 [INFO][5010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.345 [INFO][5010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.381 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.381 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.389 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.397 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.397 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.401 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:00.412666 containerd[1720]: 2025-08-13 07:18:00.405 [INFO][5010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:00.413452 containerd[1720]: time="2025-08-13T07:18:00.413402750Z" level=info msg="TearDown network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" successfully" Aug 13 07:18:00.413562 containerd[1720]: time="2025-08-13T07:18:00.413543754Z" level=info msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" returns successfully" Aug 13 07:18:00.415291 containerd[1720]: time="2025-08-13T07:18:00.414977389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-rqqwz,Uid:d895fcd6-d479-4f4e-87f8-3b6aee927688,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:00.417184 systemd[1]: run-netns-cni\x2da15d9906\x2de5dc\x2d5c81\x2d7dd0\x2debd5d5e15471.mount: Deactivated successfully. 
Aug 13 07:18:00.425244 systemd[1]: run-netns-cni\x2d239d0a0e\x2d0870\x2db089\x2d2d6b\x2de07aae76ae52.mount: Deactivated successfully. Aug 13 07:18:00.555747 kubelet[3215]: I0813 07:18:00.554102 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-672s8" podStartSLOduration=42.554080293 podStartE2EDuration="42.554080293s" podCreationTimestamp="2025-08-13 07:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:00.553731385 +0000 UTC m=+46.406585237" watchObservedRunningTime="2025-08-13 07:18:00.554080293 +0000 UTC m=+46.406934145" Aug 13 07:18:00.765949 systemd-networkd[1578]: calid0eb99a19f7: Link UP Aug 13 07:18:00.767612 systemd-networkd[1578]: calid0eb99a19f7: Gained carrier Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.561 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0 calico-apiserver-5cdd967ff- calico-apiserver d895fcd6-d479-4f4e-87f8-3b6aee927688 947 0 2025-08-13 07:17:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cdd967ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 calico-apiserver-5cdd967ff-rqqwz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0eb99a19f7 [] [] }} ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.561 [INFO][5035] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.671 [INFO][5059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.672 [INFO][5059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026daf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"calico-apiserver-5cdd967ff-rqqwz", "timestamp":"2025-08-13 07:18:00.671480967 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.672 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.673 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.673 [INFO][5059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.691 [INFO][5059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.704 [INFO][5059] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.719 [INFO][5059] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.722 [INFO][5059] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.726 [INFO][5059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.726 [INFO][5059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.728 [INFO][5059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.738 [INFO][5059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5059] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.131/26] block=192.168.91.128/26 handle="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.131/26] handle="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:00.806582 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.131/26] IPv6=[] ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.757 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"d895fcd6-d479-4f4e-87f8-3b6aee927688", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"calico-apiserver-5cdd967ff-rqqwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0eb99a19f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.757 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.131/32] ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.758 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0eb99a19f7 ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.769 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" 
WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.769 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"d895fcd6-d479-4f4e-87f8-3b6aee927688", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c", Pod:"calico-apiserver-5cdd967ff-rqqwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0eb99a19f7", MAC:"0a:c1:75:b1:ab:b9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:00.808740 containerd[1720]: 2025-08-13 07:18:00.797 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-rqqwz" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:00.867986 containerd[1720]: time="2025-08-13T07:18:00.866551476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:00.867986 containerd[1720]: time="2025-08-13T07:18:00.866616878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:00.867986 containerd[1720]: time="2025-08-13T07:18:00.866657979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:00.867986 containerd[1720]: time="2025-08-13T07:18:00.866772783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:00.892613 systemd-networkd[1578]: cali30b0537660c: Link UP Aug 13 07:18:00.896836 systemd-networkd[1578]: cali30b0537660c: Gained carrier Aug 13 07:18:00.931383 systemd[1]: Started cri-containerd-d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c.scope - libcontainer container d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c. 
Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.613 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0 calico-apiserver-966bb757f- calico-apiserver 31ff16bd-65fa-4475-be19-58aa527037ea 948 0 2025-08-13 07:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:966bb757f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 calico-apiserver-966bb757f-8qwrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali30b0537660c [] [] }} ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.618 [INFO][5041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.699 [INFO][5067] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" HandleID="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.700 [INFO][5067] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" HandleID="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"calico-apiserver-966bb757f-8qwrf", "timestamp":"2025-08-13 07:18:00.699563154 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.700 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.752 [INFO][5067] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.804 [INFO][5067] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.816 [INFO][5067] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.825 [INFO][5067] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.830 [INFO][5067] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.833 [INFO][5067] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.833 [INFO][5067] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.836 [INFO][5067] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.846 [INFO][5067] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.867 [INFO][5067] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.132/26] block=192.168.91.128/26 handle="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.867 [INFO][5067] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.132/26] handle="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.867 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:00.934438 containerd[1720]: 2025-08-13 07:18:00.868 [INFO][5067] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.132/26] IPv6=[] ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" HandleID="k8s-pod-network.511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.884 [INFO][5041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31ff16bd-65fa-4475-be19-58aa527037ea", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"calico-apiserver-966bb757f-8qwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30b0537660c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.884 [INFO][5041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.132/32] ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.885 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30b0537660c ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.900 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" 
WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.903 [INFO][5041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31ff16bd-65fa-4475-be19-58aa527037ea", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f", Pod:"calico-apiserver-966bb757f-8qwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30b0537660c", MAC:"e2:48:d5:d3:3f:56", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:00.935720 containerd[1720]: 2025-08-13 07:18:00.929 [INFO][5041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-8qwrf" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:01.005394 systemd-networkd[1578]: cali842127915f3: Gained IPv6LL Aug 13 07:18:01.025737 containerd[1720]: time="2025-08-13T07:18:01.025229267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-rqqwz,Uid:d895fcd6-d479-4f4e-87f8-3b6aee927688,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\"" Aug 13 07:18:01.167899 containerd[1720]: time="2025-08-13T07:18:01.167393298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:01.167899 containerd[1720]: time="2025-08-13T07:18:01.167469000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:01.167899 containerd[1720]: time="2025-08-13T07:18:01.167505702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:01.167899 containerd[1720]: time="2025-08-13T07:18:01.167600005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:01.207859 systemd[1]: Started cri-containerd-511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f.scope - libcontainer container 511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f. 
Aug 13 07:18:01.258771 containerd[1720]: time="2025-08-13T07:18:01.258724301Z" level=info msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" Aug 13 07:18:01.262141 containerd[1720]: time="2025-08-13T07:18:01.262098416Z" level=info msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" Aug 13 07:18:01.331642 containerd[1720]: time="2025-08-13T07:18:01.331539975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-8qwrf,Uid:31ff16bd-65fa-4475-be19-58aa527037ea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f\"" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.478 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.478 [INFO][5202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" iface="eth0" netns="/var/run/netns/cni-9f07fbfd-8e04-2687-d689-2e8b903177ea" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.479 [INFO][5202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" iface="eth0" netns="/var/run/netns/cni-9f07fbfd-8e04-2687-d689-2e8b903177ea" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.480 [INFO][5202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" iface="eth0" netns="/var/run/netns/cni-9f07fbfd-8e04-2687-d689-2e8b903177ea" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.480 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.480 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.582 [INFO][5213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.582 [INFO][5213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.582 [INFO][5213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.600 [WARNING][5213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.600 [INFO][5213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.605 [INFO][5213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:01.609562 containerd[1720]: 2025-08-13 07:18:01.607 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:01.612170 containerd[1720]: time="2025-08-13T07:18:01.611818299Z" level=info msg="TearDown network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" successfully" Aug 13 07:18:01.612170 containerd[1720]: time="2025-08-13T07:18:01.611852000Z" level=info msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" returns successfully" Aug 13 07:18:01.619706 containerd[1720]: time="2025-08-13T07:18:01.617358487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfzv4,Uid:3bcaff83-98f1-4f1e-9ec2-0de878c93569,Namespace:kube-system,Attempt:1,}" Aug 13 07:18:01.621489 systemd[1]: run-netns-cni\x2d9f07fbfd\x2d8e04\x2d2687\x2dd689\x2d2e8b903177ea.mount: Deactivated successfully. 
Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.471 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.473 [INFO][5190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" iface="eth0" netns="/var/run/netns/cni-70c1bd0c-039e-4342-5a23-32ccd144113b" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.474 [INFO][5190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" iface="eth0" netns="/var/run/netns/cni-70c1bd0c-039e-4342-5a23-32ccd144113b" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.475 [INFO][5190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" iface="eth0" netns="/var/run/netns/cni-70c1bd0c-039e-4342-5a23-32ccd144113b" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.475 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.475 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.592 [INFO][5211] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.592 
[INFO][5211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.605 [INFO][5211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.626 [WARNING][5211] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.626 [INFO][5211] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.629 [INFO][5211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:01.634102 containerd[1720]: 2025-08-13 07:18:01.632 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:01.636448 containerd[1720]: time="2025-08-13T07:18:01.634608373Z" level=info msg="TearDown network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" successfully" Aug 13 07:18:01.636448 containerd[1720]: time="2025-08-13T07:18:01.634641874Z" level=info msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" returns successfully" Aug 13 07:18:01.639865 containerd[1720]: time="2025-08-13T07:18:01.639819150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8g2b7,Uid:0d2537cf-0c17-4fe1-83ab-ece63f331986,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:01.640820 systemd[1]: run-netns-cni\x2d70c1bd0c\x2d039e\x2d4342\x2d5a23\x2d32ccd144113b.mount: Deactivated successfully. Aug 13 07:18:01.909099 systemd-networkd[1578]: calif9ef0e554d9: Link UP Aug 13 07:18:01.912199 systemd-networkd[1578]: calif9ef0e554d9: Gained carrier Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.778 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0 coredns-668d6bf9bc- kube-system 3bcaff83-98f1-4f1e-9ec2-0de878c93569 971 0 2025-08-13 07:17:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 coredns-668d6bf9bc-dfzv4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9ef0e554d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-" Aug 13 07:18:01.944143 
containerd[1720]: 2025-08-13 07:18:01.778 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.838 [INFO][5248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" HandleID="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.839 [INFO][5248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" HandleID="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041fb40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"coredns-668d6bf9bc-dfzv4", "timestamp":"2025-08-13 07:18:01.838574504 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.839 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.839 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.839 [INFO][5248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.850 [INFO][5248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.857 [INFO][5248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.863 [INFO][5248] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.869 [INFO][5248] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.874 [INFO][5248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.874 [INFO][5248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.877 [INFO][5248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10 Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.882 [INFO][5248] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.896 [INFO][5248] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.133/26] block=192.168.91.128/26 handle="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.896 [INFO][5248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.133/26] handle="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.896 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:01.944143 containerd[1720]: 2025-08-13 07:18:01.897 [INFO][5248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.133/26] IPv6=[] ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" HandleID="k8s-pod-network.59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.902 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bcaff83-98f1-4f1e-9ec2-0de878c93569", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"coredns-668d6bf9bc-dfzv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9ef0e554d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.902 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.133/32] ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.902 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9ef0e554d9 ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.917 [INFO][5224] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.918 [INFO][5224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bcaff83-98f1-4f1e-9ec2-0de878c93569", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10", Pod:"coredns-668d6bf9bc-dfzv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9ef0e554d9", MAC:"e2:76:f5:73:b8:e1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:01.945457 containerd[1720]: 2025-08-13 07:18:01.940 [INFO][5224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfzv4" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:02.006526 containerd[1720]: time="2025-08-13T07:18:02.005465275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:02.006526 containerd[1720]: time="2025-08-13T07:18:02.005538277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:02.006526 containerd[1720]: time="2025-08-13T07:18:02.005560278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:02.006526 containerd[1720]: time="2025-08-13T07:18:02.005676882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:02.042842 systemd[1]: Started cri-containerd-59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10.scope - libcontainer container 59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10. 
Aug 13 07:18:02.062546 systemd-networkd[1578]: cali11a6f569c64: Link UP Aug 13 07:18:02.065421 systemd-networkd[1578]: cali11a6f569c64: Gained carrier Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.786 [INFO][5233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0 goldmane-768f4c5c69- calico-system 0d2537cf-0c17-4fe1-83ab-ece63f331986 970 0 2025-08-13 07:17:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 goldmane-768f4c5c69-8g2b7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali11a6f569c64 [] [] }} ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.786 [INFO][5233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.872 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" HandleID="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.873 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" HandleID="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"goldmane-768f4c5c69-8g2b7", "timestamp":"2025-08-13 07:18:01.87231335 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.873 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.896 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.897 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.956 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.967 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.985 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.990 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.995 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.995 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:01.999 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18 Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:02.009 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:02.035 [INFO][5253] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.134/26] block=192.168.91.128/26 handle="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:02.035 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.134/26] handle="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:02.035 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:02.093617 containerd[1720]: 2025-08-13 07:18:02.035 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.134/26] IPv6=[] ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" HandleID="k8s-pod-network.b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.058 [INFO][5233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0d2537cf-0c17-4fe1-83ab-ece63f331986", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"goldmane-768f4c5c69-8g2b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali11a6f569c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.059 [INFO][5233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.134/32] ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.059 [INFO][5233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11a6f569c64 ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.065 [INFO][5233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.065 [INFO][5233] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0d2537cf-0c17-4fe1-83ab-ece63f331986", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18", Pod:"goldmane-768f4c5c69-8g2b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali11a6f569c64", MAC:"fa:c6:9d:62:f6:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:02.096918 containerd[1720]: 2025-08-13 07:18:02.090 [INFO][5233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18" Namespace="calico-system" Pod="goldmane-768f4c5c69-8g2b7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:02.164993 containerd[1720]: time="2025-08-13T07:18:02.162282603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:02.164993 containerd[1720]: time="2025-08-13T07:18:02.162360506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:02.164993 containerd[1720]: time="2025-08-13T07:18:02.162383206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:02.164993 containerd[1720]: time="2025-08-13T07:18:02.162477110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:02.189308 containerd[1720]: time="2025-08-13T07:18:02.186090212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfzv4,Uid:3bcaff83-98f1-4f1e-9ec2-0de878c93569,Namespace:kube-system,Attempt:1,} returns sandbox id \"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10\"" Aug 13 07:18:02.194682 containerd[1720]: time="2025-08-13T07:18:02.194639902Z" level=info msg="CreateContainer within sandbox \"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:18:02.215500 systemd[1]: Started cri-containerd-b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18.scope - libcontainer container b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18. 
Aug 13 07:18:02.261487 containerd[1720]: time="2025-08-13T07:18:02.259926921Z" level=info msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" Aug 13 07:18:02.263210 containerd[1720]: time="2025-08-13T07:18:02.263111829Z" level=info msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" Aug 13 07:18:02.274107 containerd[1720]: time="2025-08-13T07:18:02.273898596Z" level=info msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" Aug 13 07:18:02.276715 containerd[1720]: time="2025-08-13T07:18:02.276684390Z" level=info msg="CreateContainer within sandbox \"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6df6078e03902a56123f92386323db150ac9ca489cb629b6c2cbcbe84410d19\"" Aug 13 07:18:02.278352 containerd[1720]: time="2025-08-13T07:18:02.278171441Z" level=info msg="StartContainer for \"b6df6078e03902a56123f92386323db150ac9ca489cb629b6c2cbcbe84410d19\"" Aug 13 07:18:02.284565 systemd-networkd[1578]: cali30b0537660c: Gained IPv6LL Aug 13 07:18:02.389460 systemd[1]: Started cri-containerd-b6df6078e03902a56123f92386323db150ac9ca489cb629b6c2cbcbe84410d19.scope - libcontainer container b6df6078e03902a56123f92386323db150ac9ca489cb629b6c2cbcbe84410d19. 
Aug 13 07:18:02.444056 containerd[1720]: time="2025-08-13T07:18:02.443597862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8g2b7,Uid:0d2537cf-0c17-4fe1-83ab-ece63f331986,Namespace:calico-system,Attempt:1,} returns sandbox id \"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18\"" Aug 13 07:18:02.478273 systemd-networkd[1578]: calid0eb99a19f7: Gained IPv6LL Aug 13 07:18:02.509875 containerd[1720]: time="2025-08-13T07:18:02.509505201Z" level=info msg="StartContainer for \"b6df6078e03902a56123f92386323db150ac9ca489cb629b6c2cbcbe84410d19\" returns successfully" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.482 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.482 [INFO][5382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" iface="eth0" netns="/var/run/netns/cni-95db9008-83d4-c883-d0ed-3e7b639d931f" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.482 [INFO][5382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" iface="eth0" netns="/var/run/netns/cni-95db9008-83d4-c883-d0ed-3e7b639d931f" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.482 [INFO][5382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" iface="eth0" netns="/var/run/netns/cni-95db9008-83d4-c883-d0ed-3e7b639d931f" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.483 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.483 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.612 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.614 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.614 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.656 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.657 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.660 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:02.672582 containerd[1720]: 2025-08-13 07:18:02.665 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:02.673864 containerd[1720]: time="2025-08-13T07:18:02.672780249Z" level=info msg="TearDown network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" successfully" Aug 13 07:18:02.673864 containerd[1720]: time="2025-08-13T07:18:02.672812950Z" level=info msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" returns successfully" Aug 13 07:18:02.676271 containerd[1720]: time="2025-08-13T07:18:02.675574744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kngq7,Uid:6dea07cd-503b-45c7-8ebe-51b022e30cd4,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:02.680371 systemd[1]: run-netns-cni\x2d95db9008\x2d83d4\x2dc883\x2dd0ed\x2d3e7b639d931f.mount: Deactivated successfully. 
Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.537 [INFO][5395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.537 [INFO][5395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" iface="eth0" netns="/var/run/netns/cni-805b66b6-1314-4a2e-ad53-024d18d8c831" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.539 [INFO][5395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" iface="eth0" netns="/var/run/netns/cni-805b66b6-1314-4a2e-ad53-024d18d8c831" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.539 [INFO][5395] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" iface="eth0" netns="/var/run/netns/cni-805b66b6-1314-4a2e-ad53-024d18d8c831" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.540 [INFO][5395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.540 [INFO][5395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.665 [INFO][5452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 
07:18:02.666 [INFO][5452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.666 [INFO][5452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.690 [WARNING][5452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.690 [INFO][5452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.692 [INFO][5452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:02.697700 containerd[1720]: 2025-08-13 07:18:02.694 [INFO][5395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:02.704016 containerd[1720]: time="2025-08-13T07:18:02.703663598Z" level=info msg="TearDown network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" successfully" Aug 13 07:18:02.704016 containerd[1720]: time="2025-08-13T07:18:02.703733201Z" level=info msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" returns successfully" Aug 13 07:18:02.707736 systemd[1]: run-netns-cni\x2d805b66b6\x2d1314\x2d4a2e\x2dad53\x2d024d18d8c831.mount: Deactivated successfully. 
Aug 13 07:18:02.712134 containerd[1720]: time="2025-08-13T07:18:02.712099285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d98d4c87-tmh2g,Uid:5ab325b6-c552-42c8-a448-2c9835fe41c3,Namespace:calico-system,Attempt:1,}" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.579 [INFO][5402] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.582 [INFO][5402] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" iface="eth0" netns="/var/run/netns/cni-44ede927-1573-3e42-391d-b4046407bfaf" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.583 [INFO][5402] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" iface="eth0" netns="/var/run/netns/cni-44ede927-1573-3e42-391d-b4046407bfaf" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.583 [INFO][5402] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" iface="eth0" netns="/var/run/netns/cni-44ede927-1573-3e42-391d-b4046407bfaf" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.583 [INFO][5402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.583 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.698 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.700 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.702 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.727 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.727 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.734 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:02.743372 containerd[1720]: 2025-08-13 07:18:02.738 [INFO][5402] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:02.744621 containerd[1720]: time="2025-08-13T07:18:02.743676658Z" level=info msg="TearDown network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" successfully" Aug 13 07:18:02.744621 containerd[1720]: time="2025-08-13T07:18:02.743722960Z" level=info msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" returns successfully" Aug 13 07:18:02.744734 containerd[1720]: time="2025-08-13T07:18:02.744704493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-7cwjt,Uid:b6d30009-e3c1-496f-8ea4-de2a0c63018b,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:18:02.958912 systemd-networkd[1578]: calie9f5bad1b01: Link UP Aug 13 07:18:02.959639 systemd-networkd[1578]: calie9f5bad1b01: Gained carrier Aug 13 07:18:03.007506 kubelet[3215]: I0813 07:18:03.005772 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dfzv4" 
podStartSLOduration=45.005748363 podStartE2EDuration="45.005748363s" podCreationTimestamp="2025-08-13 07:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:02.62513753 +0000 UTC m=+48.477991382" watchObservedRunningTime="2025-08-13 07:18:03.005748363 +0000 UTC m=+48.858602215" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.791 [INFO][5471] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0 csi-node-driver- calico-system 6dea07cd-503b-45c7-8ebe-51b022e30cd4 987 0 2025-08-13 07:17:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 csi-node-driver-kngq7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie9f5bad1b01 [] [] }} ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.791 [INFO][5471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.869 [INFO][5493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" 
HandleID="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.869 [INFO][5493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" HandleID="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"csi-node-driver-kngq7", "timestamp":"2025-08-13 07:18:02.869295726 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.869 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.869 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.869 [INFO][5493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.884 [INFO][5493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.893 [INFO][5493] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.909 [INFO][5493] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.911 [INFO][5493] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.916 [INFO][5493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.916 [INFO][5493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.919 [INFO][5493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36 Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.931 [INFO][5493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.945 [INFO][5493] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.135/26] block=192.168.91.128/26 handle="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.945 [INFO][5493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.135/26] handle="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.945 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:03.024284 containerd[1720]: 2025-08-13 07:18:02.945 [INFO][5493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.135/26] IPv6=[] ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" HandleID="k8s-pod-network.6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.027602 containerd[1720]: 2025-08-13 07:18:02.953 [INFO][5471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea07cd-503b-45c7-8ebe-51b022e30cd4", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"csi-node-driver-kngq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9f5bad1b01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.027602 containerd[1720]: 2025-08-13 07:18:02.953 [INFO][5471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.135/32] ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.027602 containerd[1720]: 2025-08-13 07:18:02.953 [INFO][5471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9f5bad1b01 ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.027602 containerd[1720]: 2025-08-13 07:18:02.959 [INFO][5471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.027602 
containerd[1720]: 2025-08-13 07:18:02.961 [INFO][5471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea07cd-503b-45c7-8ebe-51b022e30cd4", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36", Pod:"csi-node-driver-kngq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9f5bad1b01", MAC:"ba:b2:02:2a:12:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.027602 containerd[1720]: 
2025-08-13 07:18:03.008 [INFO][5471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36" Namespace="calico-system" Pod="csi-node-driver-kngq7" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:03.129778 containerd[1720]: time="2025-08-13T07:18:03.129281460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:03.129778 containerd[1720]: time="2025-08-13T07:18:03.129367163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:03.129778 containerd[1720]: time="2025-08-13T07:18:03.129387564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.129778 containerd[1720]: time="2025-08-13T07:18:03.129555970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.180465 systemd[1]: Started cri-containerd-6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36.scope - libcontainer container 6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36. 
Aug 13 07:18:03.237878 systemd-networkd[1578]: cali7a158eb2135: Link UP Aug 13 07:18:03.243603 systemd-networkd[1578]: cali7a158eb2135: Gained carrier Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:02.934 [INFO][5501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0 calico-apiserver-5cdd967ff- calico-apiserver b6d30009-e3c1-496f-8ea4-de2a0c63018b 990 0 2025-08-13 07:17:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cdd967ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 calico-apiserver-5cdd967ff-7cwjt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a158eb2135 [] [] }} ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:02.934 [INFO][5501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.048 [INFO][5520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.296085 
containerd[1720]: 2025-08-13 07:18:03.048 [INFO][5520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"calico-apiserver-5cdd967ff-7cwjt", "timestamp":"2025-08-13 07:18:03.048062101 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.051 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.051 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.051 [INFO][5520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.079 [INFO][5520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.106 [INFO][5520] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.133 [INFO][5520] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.139 [INFO][5520] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.144 [INFO][5520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.144 [INFO][5520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.150 [INFO][5520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.185 [INFO][5520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5520] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.91.136/26] block=192.168.91.128/26 handle="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.136/26] handle="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:03.296085 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.136/26] IPv6=[] ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.225 [INFO][5501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d30009-e3c1-496f-8ea4-de2a0c63018b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"calico-apiserver-5cdd967ff-7cwjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a158eb2135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.226 [INFO][5501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.136/32] ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.227 [INFO][5501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a158eb2135 ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.247 [INFO][5501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" 
WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.253 [INFO][5501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d30009-e3c1-496f-8ea4-de2a0c63018b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f", Pod:"calico-apiserver-5cdd967ff-7cwjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a158eb2135", MAC:"82:da:b7:3a:34:24", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.298437 containerd[1720]: 2025-08-13 07:18:03.280 [INFO][5501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Namespace="calico-apiserver" Pod="calico-apiserver-5cdd967ff-7cwjt" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:03.350319 containerd[1720]: time="2025-08-13T07:18:03.346065927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kngq7,Uid:6dea07cd-503b-45c7-8ebe-51b022e30cd4,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36\"" Aug 13 07:18:03.346628 systemd-networkd[1578]: cali41f060ce567: Link UP Aug 13 07:18:03.346887 systemd-networkd[1578]: cali41f060ce567: Gained carrier Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:02.917 [INFO][5482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0 calico-kube-controllers-65d98d4c87- calico-system 5ab325b6-c552-42c8-a448-2c9835fe41c3 989 0 2025-08-13 07:17:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65d98d4c87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 calico-kube-controllers-65d98d4c87-tmh2g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali41f060ce567 [] [] }} ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" 
WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:02.918 [INFO][5482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.089 [INFO][5515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" HandleID="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.092 [INFO][5515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" HandleID="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046f320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"calico-kube-controllers-65d98d4c87-tmh2g", "timestamp":"2025-08-13 07:18:03.089300502 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.092 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.208 [INFO][5515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.230 [INFO][5515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.254 [INFO][5515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.281 [INFO][5515] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.289 [INFO][5515] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.296 [INFO][5515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.296 [INFO][5515] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.299 [INFO][5515] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8 Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.316 [INFO][5515] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" 
host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.334 [INFO][5515] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.91.137/26] block=192.168.91.128/26 handle="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.334 [INFO][5515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.137/26] handle="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.334 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:03.382330 containerd[1720]: 2025-08-13 07:18:03.334 [INFO][5515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.137/26] IPv6=[] ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" HandleID="k8s-pod-network.572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.342 [INFO][5482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0", GenerateName:"calico-kube-controllers-65d98d4c87-", Namespace:"calico-system", SelfLink:"", UID:"5ab325b6-c552-42c8-a448-2c9835fe41c3", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d98d4c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"calico-kube-controllers-65d98d4c87-tmh2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41f060ce567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.342 [INFO][5482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.137/32] ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.342 [INFO][5482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41f060ce567 ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.346 [INFO][5482] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.347 [INFO][5482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0", GenerateName:"calico-kube-controllers-65d98d4c87-", Namespace:"calico-system", SelfLink:"", UID:"5ab325b6-c552-42c8-a448-2c9835fe41c3", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d98d4c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8", Pod:"calico-kube-controllers-65d98d4c87-tmh2g", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41f060ce567", MAC:"b6:17:54:73:a8:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:03.383271 containerd[1720]: 2025-08-13 07:18:03.379 [INFO][5482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8" Namespace="calico-system" Pod="calico-kube-controllers-65d98d4c87-tmh2g" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:03.385074 containerd[1720]: time="2025-08-13T07:18:03.384405329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:03.385074 containerd[1720]: time="2025-08-13T07:18:03.384857745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:03.388558 containerd[1720]: time="2025-08-13T07:18:03.387822045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.388558 containerd[1720]: time="2025-08-13T07:18:03.388042353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.432616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273527875.mount: Deactivated successfully. Aug 13 07:18:03.432748 systemd[1]: run-netns-cni\x2d44ede927\x2d1573\x2d3e42\x2d391d\x2db4046407bfaf.mount: Deactivated successfully. 
Aug 13 07:18:03.461454 systemd[1]: Started cri-containerd-dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f.scope - libcontainer container dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f. Aug 13 07:18:03.471560 containerd[1720]: time="2025-08-13T07:18:03.471409886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:03.471742 containerd[1720]: time="2025-08-13T07:18:03.471718196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:03.471838 containerd[1720]: time="2025-08-13T07:18:03.471820500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.472049 containerd[1720]: time="2025-08-13T07:18:03.472014106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:03.498909 containerd[1720]: time="2025-08-13T07:18:03.498805916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.503973 containerd[1720]: time="2025-08-13T07:18:03.503924790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:18:03.507311 containerd[1720]: time="2025-08-13T07:18:03.507278404Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.510054 systemd[1]: run-containerd-runc-k8s.io-572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8-runc.bIDolh.mount: Deactivated successfully. 
Aug 13 07:18:03.513714 containerd[1720]: time="2025-08-13T07:18:03.513469715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:03.515353 containerd[1720]: time="2025-08-13T07:18:03.515312677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.690219698s" Aug 13 07:18:03.515479 containerd[1720]: time="2025-08-13T07:18:03.515459982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:18:03.518654 containerd[1720]: time="2025-08-13T07:18:03.518452684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:03.520459 containerd[1720]: time="2025-08-13T07:18:03.520287246Z" level=info msg="CreateContainer within sandbox \"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:18:03.524439 systemd[1]: Started cri-containerd-572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8.scope - libcontainer container 572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8. 
Aug 13 07:18:03.566385 containerd[1720]: time="2025-08-13T07:18:03.565841494Z" level=info msg="CreateContainer within sandbox \"ba3ea167acb840303e8c4820970e6d48460e6cf220a5707b4adb58368fffb7ae\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"103208b599c4f410b735949632834d247ff39334ea9be98de85cad10a4127d5c\"" Aug 13 07:18:03.568926 containerd[1720]: time="2025-08-13T07:18:03.568895098Z" level=info msg="StartContainer for \"103208b599c4f410b735949632834d247ff39334ea9be98de85cad10a4127d5c\"" Aug 13 07:18:03.588297 containerd[1720]: time="2025-08-13T07:18:03.588083450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdd967ff-7cwjt,Uid:b6d30009-e3c1-496f-8ea4-de2a0c63018b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\"" Aug 13 07:18:03.624451 systemd[1]: Started cri-containerd-103208b599c4f410b735949632834d247ff39334ea9be98de85cad10a4127d5c.scope - libcontainer container 103208b599c4f410b735949632834d247ff39334ea9be98de85cad10a4127d5c. 
Aug 13 07:18:03.672554 containerd[1720]: time="2025-08-13T07:18:03.672456617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d98d4c87-tmh2g,Uid:5ab325b6-c552-42c8-a448-2c9835fe41c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8\"" Aug 13 07:18:03.747700 containerd[1720]: time="2025-08-13T07:18:03.747034951Z" level=info msg="StartContainer for \"103208b599c4f410b735949632834d247ff39334ea9be98de85cad10a4127d5c\" returns successfully" Aug 13 07:18:03.820480 systemd-networkd[1578]: calif9ef0e554d9: Gained IPv6LL Aug 13 07:18:04.012481 systemd-networkd[1578]: cali11a6f569c64: Gained IPv6LL Aug 13 07:18:04.396448 systemd-networkd[1578]: cali7a158eb2135: Gained IPv6LL Aug 13 07:18:04.460560 systemd-networkd[1578]: cali41f060ce567: Gained IPv6LL Aug 13 07:18:04.845007 systemd-networkd[1578]: calie9f5bad1b01: Gained IPv6LL Aug 13 07:18:06.302292 containerd[1720]: time="2025-08-13T07:18:06.301947049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.305132 containerd[1720]: time="2025-08-13T07:18:06.305052853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:18:06.309268 containerd[1720]: time="2025-08-13T07:18:06.309201891Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.314792 containerd[1720]: time="2025-08-13T07:18:06.314500169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.315371 containerd[1720]: time="2025-08-13T07:18:06.315333097Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.796835811s" Aug 13 07:18:06.315448 containerd[1720]: time="2025-08-13T07:18:06.315373298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:18:06.316541 containerd[1720]: time="2025-08-13T07:18:06.316519136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:06.318084 containerd[1720]: time="2025-08-13T07:18:06.317949984Z" level=info msg="CreateContainer within sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:18:06.362825 containerd[1720]: time="2025-08-13T07:18:06.362785484Z" level=info msg="CreateContainer within sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\"" Aug 13 07:18:06.363689 containerd[1720]: time="2025-08-13T07:18:06.363615612Z" level=info msg="StartContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\"" Aug 13 07:18:06.404426 systemd[1]: Started cri-containerd-b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119.scope - libcontainer container b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119. 
Aug 13 07:18:06.449860 containerd[1720]: time="2025-08-13T07:18:06.449748693Z" level=info msg="StartContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" returns successfully" Aug 13 07:18:06.628019 kubelet[3215]: I0813 07:18:06.627868 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d54fbbfdb-qsfwk" podStartSLOduration=4.284259082 podStartE2EDuration="10.627845952s" podCreationTimestamp="2025-08-13 07:17:56 +0000 UTC" firstStartedPulling="2025-08-13 07:17:57.17386958 +0000 UTC m=+43.026723432" lastFinishedPulling="2025-08-13 07:18:03.51745635 +0000 UTC m=+49.370310302" observedRunningTime="2025-08-13 07:18:04.617497928 +0000 UTC m=+50.470351880" watchObservedRunningTime="2025-08-13 07:18:06.627845952 +0000 UTC m=+52.480699904" Aug 13 07:18:06.933631 containerd[1720]: time="2025-08-13T07:18:06.933495177Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:06.938273 containerd[1720]: time="2025-08-13T07:18:06.937307505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:18:06.939457 containerd[1720]: time="2025-08-13T07:18:06.939425275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 622.755634ms" Aug 13 07:18:06.939576 containerd[1720]: time="2025-08-13T07:18:06.939559180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:18:06.940649 containerd[1720]: 
time="2025-08-13T07:18:06.940601615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:18:06.942691 containerd[1720]: time="2025-08-13T07:18:06.942659184Z" level=info msg="CreateContainer within sandbox \"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:18:06.985552 containerd[1720]: time="2025-08-13T07:18:06.985510617Z" level=info msg="CreateContainer within sandbox \"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c98e61aa43cf0249afd7f8899e0df2744f10da42fd9b0df4e9f4f688573634b7\"" Aug 13 07:18:06.986369 containerd[1720]: time="2025-08-13T07:18:06.986325144Z" level=info msg="StartContainer for \"c98e61aa43cf0249afd7f8899e0df2744f10da42fd9b0df4e9f4f688573634b7\"" Aug 13 07:18:07.027451 systemd[1]: Started cri-containerd-c98e61aa43cf0249afd7f8899e0df2744f10da42fd9b0df4e9f4f688573634b7.scope - libcontainer container c98e61aa43cf0249afd7f8899e0df2744f10da42fd9b0df4e9f4f688573634b7. 
Aug 13 07:18:07.092349 containerd[1720]: time="2025-08-13T07:18:07.092299890Z" level=info msg="StartContainer for \"c98e61aa43cf0249afd7f8899e0df2744f10da42fd9b0df4e9f4f688573634b7\" returns successfully" Aug 13 07:18:07.615305 kubelet[3215]: I0813 07:18:07.615266 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:07.634081 kubelet[3215]: I0813 07:18:07.634007 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cdd967ff-rqqwz" podStartSLOduration=34.344621408 podStartE2EDuration="39.633985612s" podCreationTimestamp="2025-08-13 07:17:28 +0000 UTC" firstStartedPulling="2025-08-13 07:18:01.026996127 +0000 UTC m=+46.879850079" lastFinishedPulling="2025-08-13 07:18:06.316360331 +0000 UTC m=+52.169214283" observedRunningTime="2025-08-13 07:18:06.631294967 +0000 UTC m=+52.484148819" watchObservedRunningTime="2025-08-13 07:18:07.633985612 +0000 UTC m=+53.486839464" Aug 13 07:18:07.634645 kubelet[3215]: I0813 07:18:07.634130 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-966bb757f-8qwrf" podStartSLOduration=32.029121614 podStartE2EDuration="37.634122216s" podCreationTimestamp="2025-08-13 07:17:30 +0000 UTC" firstStartedPulling="2025-08-13 07:18:01.335394206 +0000 UTC m=+47.188248058" lastFinishedPulling="2025-08-13 07:18:06.940394808 +0000 UTC m=+52.793248660" observedRunningTime="2025-08-13 07:18:07.633174484 +0000 UTC m=+53.486028336" watchObservedRunningTime="2025-08-13 07:18:07.634122216 +0000 UTC m=+53.486976068" Aug 13 07:18:09.274178 systemd[1]: Created slice kubepods-besteffort-podaba0e30e_70aa_428b_b01f_24be667e8f9d.slice - libcontainer container kubepods-besteffort-podaba0e30e_70aa_428b_b01f_24be667e8f9d.slice. 
Aug 13 07:18:09.393250 kubelet[3215]: I0813 07:18:09.393199 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw9x\" (UniqueName: \"kubernetes.io/projected/aba0e30e-70aa-428b-b01f-24be667e8f9d-kube-api-access-xzw9x\") pod \"calico-apiserver-966bb757f-gbwrw\" (UID: \"aba0e30e-70aa-428b-b01f-24be667e8f9d\") " pod="calico-apiserver/calico-apiserver-966bb757f-gbwrw" Aug 13 07:18:09.394677 kubelet[3215]: I0813 07:18:09.394182 3215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aba0e30e-70aa-428b-b01f-24be667e8f9d-calico-apiserver-certs\") pod \"calico-apiserver-966bb757f-gbwrw\" (UID: \"aba0e30e-70aa-428b-b01f-24be667e8f9d\") " pod="calico-apiserver/calico-apiserver-966bb757f-gbwrw" Aug 13 07:18:09.583702 containerd[1720]: time="2025-08-13T07:18:09.583665837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-gbwrw,Uid:aba0e30e-70aa-428b-b01f-24be667e8f9d,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:18:09.805181 systemd-networkd[1578]: caliabafdee381b: Link UP Aug 13 07:18:09.806881 systemd-networkd[1578]: caliabafdee381b: Gained carrier Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.687 [INFO][5842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0 calico-apiserver-966bb757f- calico-apiserver aba0e30e-70aa-428b-b01f-24be667e8f9d 1070 0 2025-08-13 07:18:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:966bb757f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-7346cb15f0 calico-apiserver-966bb757f-gbwrw eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliabafdee381b [] [] }} ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.687 [INFO][5842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.726 [INFO][5854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" HandleID="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.727 [INFO][5854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" HandleID="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-7346cb15f0", "pod":"calico-apiserver-966bb757f-gbwrw", "timestamp":"2025-08-13 07:18:09.726973032 +0000 UTC"}, Hostname:"ci-4081.3.5-a-7346cb15f0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 
07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.727 [INFO][5854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.727 [INFO][5854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.727 [INFO][5854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-7346cb15f0' Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.739 [INFO][5854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.752 [INFO][5854] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.760 [INFO][5854] ipam/ipam.go 511: Trying affinity for 192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.763 [INFO][5854] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.766 [INFO][5854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.766 [INFO][5854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.770 [INFO][5854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.781 [INFO][5854] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.795 [INFO][5854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.91.138/26] block=192.168.91.128/26 handle="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.795 [INFO][5854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.138/26] handle="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" host="ci-4081.3.5-a-7346cb15f0" Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.795 [INFO][5854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:09.843935 containerd[1720]: 2025-08-13 07:18:09.795 [INFO][5854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.91.138/26] IPv6=[] ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" HandleID="k8s-pod-network.4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.845657 containerd[1720]: 2025-08-13 07:18:09.800 [INFO][5842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba0e30e-70aa-428b-b01f-24be667e8f9d", 
ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"", Pod:"calico-apiserver-966bb757f-gbwrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabafdee381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:09.845657 containerd[1720]: 2025-08-13 07:18:09.800 [INFO][5842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.138/32] ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.845657 containerd[1720]: 2025-08-13 07:18:09.800 [INFO][5842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabafdee381b ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.845657 containerd[1720]: 
2025-08-13 07:18:09.808 [INFO][5842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.845657 containerd[1720]: 2025-08-13 07:18:09.810 [INFO][5842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba0e30e-70aa-428b-b01f-24be667e8f9d", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 18, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d", Pod:"calico-apiserver-966bb757f-gbwrw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.91.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabafdee381b", MAC:"ba:33:ef:b9:86:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:09.845657 containerd[1720]: 2025-08-13 07:18:09.838 [INFO][5842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d" Namespace="calico-apiserver" Pod="calico-apiserver-966bb757f-gbwrw" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--gbwrw-eth0" Aug 13 07:18:09.893332 containerd[1720]: time="2025-08-13T07:18:09.892771078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:18:09.895030 containerd[1720]: time="2025-08-13T07:18:09.894304830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:18:09.895030 containerd[1720]: time="2025-08-13T07:18:09.894326130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:09.895030 containerd[1720]: time="2025-08-13T07:18:09.894411633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:18:09.928453 systemd[1]: Started cri-containerd-4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d.scope - libcontainer container 4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d. 
Aug 13 07:18:09.996048 containerd[1720]: time="2025-08-13T07:18:09.996004632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-966bb757f-gbwrw,Uid:aba0e30e-70aa-428b-b01f-24be667e8f9d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d\"" Aug 13 07:18:10.002876 containerd[1720]: time="2025-08-13T07:18:10.002825460Z" level=info msg="CreateContainer within sandbox \"4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:18:10.041699 containerd[1720]: time="2025-08-13T07:18:10.041549156Z" level=info msg="CreateContainer within sandbox \"4c2ea8d504911d3ef529bbf879ce1a385644d645811a49c0c45b8c3386b9a67d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58496bfdbbfa9c5a3a7c982d5a706fff3e2c3f1fc9b6cb51522ad328df14feb2\"" Aug 13 07:18:10.042992 containerd[1720]: time="2025-08-13T07:18:10.042903801Z" level=info msg="StartContainer for \"58496bfdbbfa9c5a3a7c982d5a706fff3e2c3f1fc9b6cb51522ad328df14feb2\"" Aug 13 07:18:10.096731 systemd[1]: Started cri-containerd-58496bfdbbfa9c5a3a7c982d5a706fff3e2c3f1fc9b6cb51522ad328df14feb2.scope - libcontainer container 58496bfdbbfa9c5a3a7c982d5a706fff3e2c3f1fc9b6cb51522ad328df14feb2. Aug 13 07:18:10.167791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791059877.mount: Deactivated successfully. 
Aug 13 07:18:10.388119 containerd[1720]: time="2025-08-13T07:18:10.386509096Z" level=info msg="StartContainer for \"58496bfdbbfa9c5a3a7c982d5a706fff3e2c3f1fc9b6cb51522ad328df14feb2\" returns successfully" Aug 13 07:18:10.702562 kubelet[3215]: I0813 07:18:10.702299 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-966bb757f-gbwrw" podStartSLOduration=1.702084754 podStartE2EDuration="1.702084754s" podCreationTimestamp="2025-08-13 07:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:18:10.701123422 +0000 UTC m=+56.553977274" watchObservedRunningTime="2025-08-13 07:18:10.702084754 +0000 UTC m=+56.554938706" Aug 13 07:18:11.116506 systemd-networkd[1578]: caliabafdee381b: Gained IPv6LL Aug 13 07:18:11.446179 containerd[1720]: time="2025-08-13T07:18:11.446048143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.448933 containerd[1720]: time="2025-08-13T07:18:11.448875637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:18:11.452148 containerd[1720]: time="2025-08-13T07:18:11.452116346Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.458669 containerd[1720]: time="2025-08-13T07:18:11.458635664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:11.460317 containerd[1720]: time="2025-08-13T07:18:11.460279619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.518754073s" Aug 13 07:18:11.460451 containerd[1720]: time="2025-08-13T07:18:11.460321620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:18:11.463271 containerd[1720]: time="2025-08-13T07:18:11.463220317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:18:11.463920 containerd[1720]: time="2025-08-13T07:18:11.463877439Z" level=info msg="CreateContainer within sandbox \"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:18:11.505611 containerd[1720]: time="2025-08-13T07:18:11.505563034Z" level=info msg="CreateContainer within sandbox \"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3\"" Aug 13 07:18:11.507991 containerd[1720]: time="2025-08-13T07:18:11.506923079Z" level=info msg="StartContainer for \"e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3\"" Aug 13 07:18:11.551415 systemd[1]: Started cri-containerd-e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3.scope - libcontainer container e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3. 
Aug 13 07:18:11.613024 containerd[1720]: time="2025-08-13T07:18:11.612956227Z" level=info msg="StartContainer for \"e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3\" returns successfully" Aug 13 07:18:11.677004 kubelet[3215]: I0813 07:18:11.676974 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:11.707591 kubelet[3215]: I0813 07:18:11.705047 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8g2b7" podStartSLOduration=30.701034082 podStartE2EDuration="39.705023207s" podCreationTimestamp="2025-08-13 07:17:32 +0000 UTC" firstStartedPulling="2025-08-13 07:18:02.458200058 +0000 UTC m=+48.311053910" lastFinishedPulling="2025-08-13 07:18:11.462189083 +0000 UTC m=+57.315043035" observedRunningTime="2025-08-13 07:18:11.702391619 +0000 UTC m=+57.555245571" watchObservedRunningTime="2025-08-13 07:18:11.705023207 +0000 UTC m=+57.557877159" Aug 13 07:18:12.741657 systemd[1]: run-containerd-runc-k8s.io-e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3-runc.ZYDkHI.mount: Deactivated successfully. Aug 13 07:18:14.253739 containerd[1720]: time="2025-08-13T07:18:14.253694910Z" level=info msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.297 [WARNING][6076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0d2537cf-0c17-4fe1-83ab-ece63f331986", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18", Pod:"goldmane-768f4c5c69-8g2b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali11a6f569c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.298 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.298 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" iface="eth0" netns="" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.298 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.298 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.335 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.335 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.336 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.341 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.342 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.343 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:14.345964 containerd[1720]: 2025-08-13 07:18:14.344 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.346653 containerd[1720]: time="2025-08-13T07:18:14.346014763Z" level=info msg="TearDown network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" successfully" Aug 13 07:18:14.346653 containerd[1720]: time="2025-08-13T07:18:14.346041463Z" level=info msg="StopPodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" returns successfully" Aug 13 07:18:14.346653 containerd[1720]: time="2025-08-13T07:18:14.346591280Z" level=info msg="RemovePodSandbox for \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" Aug 13 07:18:14.346653 containerd[1720]: time="2025-08-13T07:18:14.346623081Z" level=info msg="Forcibly stopping sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\"" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.380 [WARNING][6098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0d2537cf-0c17-4fe1-83ab-ece63f331986", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"b6832f46f885445ebbadaa034b437fcd1fbf43eff60684ebd484a50c2b12ac18", Pod:"goldmane-768f4c5c69-8g2b7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali11a6f569c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.381 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.381 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" iface="eth0" netns="" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.381 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.381 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.406 [INFO][6105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.406 [INFO][6105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.406 [INFO][6105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.412 [WARNING][6105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.412 [INFO][6105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" HandleID="k8s-pod-network.503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-goldmane--768f4c5c69--8g2b7-eth0" Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.414 [INFO][6105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:14.417150 containerd[1720]: 2025-08-13 07:18:14.415 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a" Aug 13 07:18:14.418063 containerd[1720]: time="2025-08-13T07:18:14.417195862Z" level=info msg="TearDown network for sandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" successfully" Aug 13 07:18:16.453321 containerd[1720]: time="2025-08-13T07:18:16.453241578Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:16.454010 containerd[1720]: time="2025-08-13T07:18:16.453356982Z" level=info msg="RemovePodSandbox \"503fe37df381fe21da995ce3a66345086ca31818b82c4876afa5769c2dc5850a\" returns successfully" Aug 13 07:18:16.454146 containerd[1720]: time="2025-08-13T07:18:16.454070804Z" level=info msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" Aug 13 07:18:16.462392 containerd[1720]: time="2025-08-13T07:18:16.462208355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:18:16.463553 containerd[1720]: time="2025-08-13T07:18:16.463109083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:16.477685 containerd[1720]: time="2025-08-13T07:18:16.477635032Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:16.479691 containerd[1720]: time="2025-08-13T07:18:16.479648794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:16.484603 containerd[1720]: time="2025-08-13T07:18:16.484560746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 5.021297428s" Aug 13 07:18:16.484713 containerd[1720]: time="2025-08-13T07:18:16.484614648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:18:16.486666 containerd[1720]: time="2025-08-13T07:18:16.486638210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:18:16.495314 containerd[1720]: time="2025-08-13T07:18:16.493529423Z" level=info msg="CreateContainer within sandbox \"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.523 [WARNING][6123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d30009-e3c1-496f-8ea4-de2a0c63018b", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f", Pod:"calico-apiserver-5cdd967ff-7cwjt", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a158eb2135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.523 [INFO][6123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.523 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" iface="eth0" netns="" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.523 [INFO][6123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.523 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.542 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.542 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.542 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.549 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.549 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.551 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:16.553862 containerd[1720]: 2025-08-13 07:18:16.552 [INFO][6123] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.554860 containerd[1720]: time="2025-08-13T07:18:16.553894389Z" level=info msg="TearDown network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" successfully" Aug 13 07:18:16.554860 containerd[1720]: time="2025-08-13T07:18:16.553922989Z" level=info msg="StopPodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" returns successfully" Aug 13 07:18:16.554860 containerd[1720]: time="2025-08-13T07:18:16.554470106Z" level=info msg="RemovePodSandbox for \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" Aug 13 07:18:16.554860 containerd[1720]: time="2025-08-13T07:18:16.554505607Z" level=info msg="Forcibly stopping sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\"" Aug 13 07:18:16.567807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004303772.mount: Deactivated successfully. Aug 13 07:18:16.572043 containerd[1720]: time="2025-08-13T07:18:16.572009748Z" level=info msg="CreateContainer within sandbox \"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6d62e21535d1616b5e31a90334dc028c7e61e9d184472be77c5735113b9d38da\"" Aug 13 07:18:16.573420 containerd[1720]: time="2025-08-13T07:18:16.573295588Z" level=info msg="StartContainer for \"6d62e21535d1616b5e31a90334dc028c7e61e9d184472be77c5735113b9d38da\"" Aug 13 07:18:16.625448 systemd[1]: Started cri-containerd-6d62e21535d1616b5e31a90334dc028c7e61e9d184472be77c5735113b9d38da.scope - libcontainer container 6d62e21535d1616b5e31a90334dc028c7e61e9d184472be77c5735113b9d38da. 
Aug 13 07:18:16.672981 containerd[1720]: time="2025-08-13T07:18:16.672941767Z" level=info msg="StartContainer for \"6d62e21535d1616b5e31a90334dc028c7e61e9d184472be77c5735113b9d38da\" returns successfully" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.640 [WARNING][6145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d30009-e3c1-496f-8ea4-de2a0c63018b", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f", Pod:"calico-apiserver-5cdd967ff-7cwjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a158eb2135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.641 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.641 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" iface="eth0" netns="" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.641 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.641 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.677 [INFO][6176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.677 [INFO][6176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.677 [INFO][6176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.686 [WARNING][6176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.686 [INFO][6176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" HandleID="k8s-pod-network.d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.690 [INFO][6176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:16.694540 containerd[1720]: 2025-08-13 07:18:16.691 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0" Aug 13 07:18:16.695558 containerd[1720]: time="2025-08-13T07:18:16.694874945Z" level=info msg="TearDown network for sandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" successfully" Aug 13 07:18:16.713962 containerd[1720]: time="2025-08-13T07:18:16.713840831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:16.714397 containerd[1720]: time="2025-08-13T07:18:16.714217943Z" level=info msg="RemovePodSandbox \"d071fa477a1d4d4bffc7c1798a468f40f65c110b6acfe0f31ec09f777ec761e0\" returns successfully" Aug 13 07:18:16.714760 containerd[1720]: time="2025-08-13T07:18:16.714727359Z" level=info msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.747 [WARNING][6202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"d895fcd6-d479-4f4e-87f8-3b6aee927688", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c", Pod:"calico-apiserver-5cdd967ff-rqqwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0eb99a19f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.747 [INFO][6202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.747 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" iface="eth0" netns="" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.747 [INFO][6202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.747 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.772 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.772 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.772 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.780 [WARNING][6209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.780 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.781 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:16.786078 containerd[1720]: 2025-08-13 07:18:16.783 [INFO][6202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.786078 containerd[1720]: time="2025-08-13T07:18:16.785766754Z" level=info msg="TearDown network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" successfully" Aug 13 07:18:16.786078 containerd[1720]: time="2025-08-13T07:18:16.785873857Z" level=info msg="StopPodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" returns successfully" Aug 13 07:18:16.788000 containerd[1720]: time="2025-08-13T07:18:16.787544809Z" level=info msg="RemovePodSandbox for \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" Aug 13 07:18:16.788000 containerd[1720]: time="2025-08-13T07:18:16.787592210Z" level=info msg="Forcibly stopping sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\"" Aug 13 07:18:16.829909 containerd[1720]: time="2025-08-13T07:18:16.829841416Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 13 07:18:16.835315 containerd[1720]: time="2025-08-13T07:18:16.833894841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:18:16.836201 containerd[1720]: time="2025-08-13T07:18:16.836161711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 348.249262ms" Aug 13 07:18:16.836336 containerd[1720]: time="2025-08-13T07:18:16.836213713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:18:16.838970 containerd[1720]: time="2025-08-13T07:18:16.837901265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:18:16.839380 containerd[1720]: time="2025-08-13T07:18:16.839347409Z" level=info msg="CreateContainer within sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.822 [WARNING][6223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0", GenerateName:"calico-apiserver-5cdd967ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"d895fcd6-d479-4f4e-87f8-3b6aee927688", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdd967ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c", Pod:"calico-apiserver-5cdd967ff-rqqwz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0eb99a19f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.822 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.822 [INFO][6223] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" iface="eth0" netns="" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.822 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.822 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.848 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.848 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.848 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.855 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.855 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" HandleID="k8s-pod-network.a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.856 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:16.867911 containerd[1720]: 2025-08-13 07:18:16.862 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6" Aug 13 07:18:16.868796 containerd[1720]: time="2025-08-13T07:18:16.868767719Z" level=info msg="TearDown network for sandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" successfully" Aug 13 07:18:16.880932 containerd[1720]: time="2025-08-13T07:18:16.880879193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:16.881074 containerd[1720]: time="2025-08-13T07:18:16.880965195Z" level=info msg="RemovePodSandbox \"a3879d2e3b307ec500297083d3b9d2f3b4f784e96b3bc875e320cdefc96b56f6\" returns successfully" Aug 13 07:18:16.881552 containerd[1720]: time="2025-08-13T07:18:16.881524113Z" level=info msg="StopPodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" Aug 13 07:18:16.888264 containerd[1720]: time="2025-08-13T07:18:16.888224720Z" level=info msg="CreateContainer within sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\"" Aug 13 07:18:16.889447 containerd[1720]: time="2025-08-13T07:18:16.889329954Z" level=info msg="StartContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\"" Aug 13 07:18:16.932460 systemd[1]: Started cri-containerd-41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc.scope - libcontainer container 41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc. Aug 13 07:18:16.997171 containerd[1720]: time="2025-08-13T07:18:16.997010281Z" level=info msg="StartContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" returns successfully" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.944 [WARNING][6244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31ff16bd-65fa-4475-be19-58aa527037ea", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f", Pod:"calico-apiserver-966bb757f-8qwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30b0537660c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.944 [INFO][6244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.944 [INFO][6244] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" iface="eth0" netns="" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.944 [INFO][6244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.944 [INFO][6244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.978 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.978 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.978 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.988 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.988 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.992 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.002072 containerd[1720]: 2025-08-13 07:18:16.998 [INFO][6244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.002072 containerd[1720]: time="2025-08-13T07:18:17.001943534Z" level=info msg="TearDown network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" successfully" Aug 13 07:18:17.002072 containerd[1720]: time="2025-08-13T07:18:17.001975035Z" level=info msg="StopPodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" returns successfully" Aug 13 07:18:17.003563 containerd[1720]: time="2025-08-13T07:18:17.003340677Z" level=info msg="RemovePodSandbox for \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" Aug 13 07:18:17.003563 containerd[1720]: time="2025-08-13T07:18:17.003370378Z" level=info msg="Forcibly stopping sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\"" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.058 [WARNING][6301] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0", GenerateName:"calico-apiserver-966bb757f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31ff16bd-65fa-4475-be19-58aa527037ea", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"966bb757f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"511b807b98d1987df2130c68bd79ac4e5c227ae2c7d2689cbabe9afe84e1289f", Pod:"calico-apiserver-966bb757f-8qwrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali30b0537660c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.059 [INFO][6301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.059 [INFO][6301] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" iface="eth0" netns="" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.059 [INFO][6301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.059 [INFO][6301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.091 [INFO][6311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.091 [INFO][6311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.091 [INFO][6311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.098 [WARNING][6311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.099 [INFO][6311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" HandleID="k8s-pod-network.f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--966bb757f--8qwrf-eth0" Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.101 [INFO][6311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.104384 containerd[1720]: 2025-08-13 07:18:17.102 [INFO][6301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3" Aug 13 07:18:17.106274 containerd[1720]: time="2025-08-13T07:18:17.104345198Z" level=info msg="TearDown network for sandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" successfully" Aug 13 07:18:17.113208 containerd[1720]: time="2025-08-13T07:18:17.113157170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:17.113450 containerd[1720]: time="2025-08-13T07:18:17.113430679Z" level=info msg="RemovePodSandbox \"f9e1eeac4f41c5a36da1a12b714bf4c75a4438e6ee30e6c97125e343dcbeeef3\" returns successfully" Aug 13 07:18:17.113980 containerd[1720]: time="2025-08-13T07:18:17.113958695Z" level=info msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.157 [WARNING][6327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea07cd-503b-45c7-8ebe-51b022e30cd4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36", Pod:"csi-node-driver-kngq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9f5bad1b01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.158 [INFO][6327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.158 [INFO][6327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" iface="eth0" netns="" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.159 [INFO][6327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.159 [INFO][6327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.189 [INFO][6335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.189 [INFO][6335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.189 [INFO][6335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.197 [WARNING][6335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.197 [INFO][6335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.199 [INFO][6335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.203379 containerd[1720]: 2025-08-13 07:18:17.201 [INFO][6327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.203882 containerd[1720]: time="2025-08-13T07:18:17.203428560Z" level=info msg="TearDown network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" successfully" Aug 13 07:18:17.203882 containerd[1720]: time="2025-08-13T07:18:17.203459061Z" level=info msg="StopPodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" returns successfully" Aug 13 07:18:17.205042 containerd[1720]: time="2025-08-13T07:18:17.204639097Z" level=info msg="RemovePodSandbox for \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" Aug 13 07:18:17.205042 containerd[1720]: time="2025-08-13T07:18:17.204677599Z" level=info msg="Forcibly stopping sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\"" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.259 [WARNING][6349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea07cd-503b-45c7-8ebe-51b022e30cd4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36", Pod:"csi-node-driver-kngq7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie9f5bad1b01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.259 [INFO][6349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.259 [INFO][6349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" iface="eth0" netns="" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.259 [INFO][6349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.259 [INFO][6349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.305 [INFO][6356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.306 [INFO][6356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.306 [INFO][6356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.315 [WARNING][6356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.315 [INFO][6356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" HandleID="k8s-pod-network.f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-csi--node--driver--kngq7-eth0" Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.316 [INFO][6356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.320346 containerd[1720]: 2025-08-13 07:18:17.319 [INFO][6349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c" Aug 13 07:18:17.321026 containerd[1720]: time="2025-08-13T07:18:17.320392874Z" level=info msg="TearDown network for sandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" successfully" Aug 13 07:18:17.330951 containerd[1720]: time="2025-08-13T07:18:17.330078774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:17.330951 containerd[1720]: time="2025-08-13T07:18:17.330166476Z" level=info msg="RemovePodSandbox \"f033dd7936d5311977a2bc5a7f43a685748f30556b07e0a83b949405f36fda3c\" returns successfully" Aug 13 07:18:17.330951 containerd[1720]: time="2025-08-13T07:18:17.330684592Z" level=info msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.386 [WARNING][6370] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0", GenerateName:"calico-kube-controllers-65d98d4c87-", Namespace:"calico-system", SelfLink:"", UID:"5ab325b6-c552-42c8-a448-2c9835fe41c3", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d98d4c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8", Pod:"calico-kube-controllers-65d98d4c87-tmh2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41f060ce567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.387 [INFO][6370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.387 [INFO][6370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" iface="eth0" netns="" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.387 [INFO][6370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.387 [INFO][6370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.418 [INFO][6377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.418 [INFO][6377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.419 [INFO][6377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.426 [WARNING][6377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.426 [INFO][6377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.428 [INFO][6377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.431052 containerd[1720]: 2025-08-13 07:18:17.429 [INFO][6370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.431792 containerd[1720]: time="2025-08-13T07:18:17.431094395Z" level=info msg="TearDown network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" successfully" Aug 13 07:18:17.431792 containerd[1720]: time="2025-08-13T07:18:17.431122196Z" level=info msg="StopPodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" returns successfully" Aug 13 07:18:17.433061 containerd[1720]: time="2025-08-13T07:18:17.432716745Z" level=info msg="RemovePodSandbox for \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" Aug 13 07:18:17.433061 containerd[1720]: time="2025-08-13T07:18:17.432756546Z" level=info msg="Forcibly stopping sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\"" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.478 [WARNING][6391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0", GenerateName:"calico-kube-controllers-65d98d4c87-", Namespace:"calico-system", SelfLink:"", UID:"5ab325b6-c552-42c8-a448-2c9835fe41c3", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d98d4c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8", Pod:"calico-kube-controllers-65d98d4c87-tmh2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41f060ce567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.478 [INFO][6391] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.478 [INFO][6391] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" iface="eth0" netns="" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.478 [INFO][6391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.478 [INFO][6391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.507 [INFO][6399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.507 [INFO][6399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.507 [INFO][6399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.515 [WARNING][6399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.515 [INFO][6399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" HandleID="k8s-pod-network.adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--kube--controllers--65d98d4c87--tmh2g-eth0" Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.517 [INFO][6399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.521374 containerd[1720]: 2025-08-13 07:18:17.519 [INFO][6391] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa" Aug 13 07:18:17.522607 containerd[1720]: time="2025-08-13T07:18:17.521419386Z" level=info msg="TearDown network for sandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" successfully" Aug 13 07:18:17.533618 containerd[1720]: time="2025-08-13T07:18:17.533163749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:17.533618 containerd[1720]: time="2025-08-13T07:18:17.533260652Z" level=info msg="RemovePodSandbox \"adeed832ca7e1cbee1de338dacba8cac4a141699255c24ebb3eabd7fb3d930aa\" returns successfully" Aug 13 07:18:17.534306 containerd[1720]: time="2025-08-13T07:18:17.533970974Z" level=info msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.593 [WARNING][6413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fd5fa15d-dd4b-47f2-8c06-e769c8807083", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461", Pod:"coredns-668d6bf9bc-672s8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali842127915f3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.595 [INFO][6413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.595 [INFO][6413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" iface="eth0" netns="" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.595 [INFO][6413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.595 [INFO][6413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.645 [INFO][6421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.645 [INFO][6421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.645 [INFO][6421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.654 [WARNING][6421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.654 [INFO][6421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.661 [INFO][6421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.666003 containerd[1720]: 2025-08-13 07:18:17.663 [INFO][6413] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.671001 containerd[1720]: time="2025-08-13T07:18:17.668609835Z" level=info msg="TearDown network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" successfully" Aug 13 07:18:17.671001 containerd[1720]: time="2025-08-13T07:18:17.668663036Z" level=info msg="StopPodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" returns successfully" Aug 13 07:18:17.671001 containerd[1720]: time="2025-08-13T07:18:17.670682499Z" level=info msg="RemovePodSandbox for \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" Aug 13 07:18:17.671001 containerd[1720]: time="2025-08-13T07:18:17.670728500Z" level=info msg="Forcibly stopping sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\"" Aug 13 07:18:17.717854 containerd[1720]: time="2025-08-13T07:18:17.716561616Z" level=info msg="StopContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" with timeout 30 (s)" Aug 13 07:18:17.718345 containerd[1720]: time="2025-08-13T07:18:17.718197067Z" level=info msg="Stop container \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" with signal terminated" Aug 13 07:18:17.741114 systemd[1]: cri-containerd-41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc.scope: Deactivated successfully. Aug 13 07:18:17.792972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc-rootfs.mount: Deactivated successfully. Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.771 [WARNING][6435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fd5fa15d-dd4b-47f2-8c06-e769c8807083", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"090d7e03cf36a2b66320d52b48d46fff5bc62d030a6a3efc8ad17ceb5bbf7461", Pod:"coredns-668d6bf9bc-672s8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali842127915f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 
07:18:17.772 [INFO][6435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.772 [INFO][6435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" iface="eth0" netns="" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.772 [INFO][6435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.772 [INFO][6435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.809 [INFO][6463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.809 [INFO][6463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.810 [INFO][6463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.815 [WARNING][6463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.815 [INFO][6463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" HandleID="k8s-pod-network.9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--672s8-eth0" Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.817 [INFO][6463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:17.819555 containerd[1720]: 2025-08-13 07:18:17.818 [INFO][6435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a" Aug 13 07:18:17.821620 containerd[1720]: time="2025-08-13T07:18:17.820069715Z" level=info msg="TearDown network for sandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" successfully" Aug 13 07:18:18.390798 kubelet[3215]: I0813 07:18:18.387765 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:18.418836 kubelet[3215]: I0813 07:18:18.418516 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cdd967ff-7cwjt" podStartSLOduration=37.172303613 podStartE2EDuration="50.418493307s" podCreationTimestamp="2025-08-13 07:17:28 +0000 UTC" firstStartedPulling="2025-08-13 07:18:03.590886945 +0000 UTC m=+49.443740897" lastFinishedPulling="2025-08-13 07:18:16.837076739 +0000 UTC m=+62.689930591" observedRunningTime="2025-08-13 07:18:17.752338422 +0000 UTC m=+63.605192274" watchObservedRunningTime="2025-08-13 07:18:18.418493307 +0000 UTC m=+64.271347259" Aug 13 07:18:18.466655 kubelet[3215]: I0813 
07:18:18.466611 3215 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:18:18.468612 containerd[1720]: time="2025-08-13T07:18:18.468564254Z" level=info msg="StopContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" with timeout 30 (s)" Aug 13 07:18:18.469130 containerd[1720]: time="2025-08-13T07:18:18.469092370Z" level=info msg="Stop container \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" with signal terminated" Aug 13 07:18:18.500719 systemd[1]: cri-containerd-b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119.scope: Deactivated successfully. Aug 13 07:18:18.501372 systemd[1]: cri-containerd-b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119.scope: Consumed 1.646s CPU time. Aug 13 07:18:18.532894 containerd[1720]: time="2025-08-13T07:18:18.532816940Z" level=info msg="shim disconnected" id=b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119 namespace=k8s.io Aug 13 07:18:18.532894 containerd[1720]: time="2025-08-13T07:18:18.532891942Z" level=warning msg="cleaning up after shim disconnected" id=b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119 namespace=k8s.io Aug 13 07:18:18.532894 containerd[1720]: time="2025-08-13T07:18:18.532903442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:18.536760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119-rootfs.mount: Deactivated successfully. Aug 13 07:18:19.140158 containerd[1720]: time="2025-08-13T07:18:19.139937200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:19.140158 containerd[1720]: time="2025-08-13T07:18:19.140045504Z" level=info msg="RemovePodSandbox \"9ffe987e478a11b9ccdefce8babc1ea061f61a8959e7f9b22fa1ed21f9dcd85a\" returns successfully" Aug 13 07:18:19.143011 containerd[1720]: time="2025-08-13T07:18:19.141840759Z" level=info msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" Aug 13 07:18:19.159614 containerd[1720]: time="2025-08-13T07:18:19.159525806Z" level=info msg="shim disconnected" id=41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc namespace=k8s.io Aug 13 07:18:19.159614 containerd[1720]: time="2025-08-13T07:18:19.159607508Z" level=warning msg="cleaning up after shim disconnected" id=41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc namespace=k8s.io Aug 13 07:18:19.159614 containerd[1720]: time="2025-08-13T07:18:19.159619408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:19.168161 containerd[1720]: time="2025-08-13T07:18:19.168114871Z" level=info msg="StopContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" returns successfully" Aug 13 07:18:19.169053 containerd[1720]: time="2025-08-13T07:18:19.169018699Z" level=info msg="StopPodSandbox for \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\"" Aug 13 07:18:19.169165 containerd[1720]: time="2025-08-13T07:18:19.169082101Z" level=info msg="Container to stop \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:18:19.177076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c-shm.mount: Deactivated successfully. Aug 13 07:18:19.189692 systemd[1]: cri-containerd-d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c.scope: Deactivated successfully. 
Aug 13 07:18:19.209602 containerd[1720]: time="2025-08-13T07:18:19.209556652Z" level=info msg="StopContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" returns successfully" Aug 13 07:18:19.211333 containerd[1720]: time="2025-08-13T07:18:19.211303406Z" level=info msg="StopPodSandbox for \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\"" Aug 13 07:18:19.211504 containerd[1720]: time="2025-08-13T07:18:19.211481311Z" level=info msg="Container to stop \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:18:19.220597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f-shm.mount: Deactivated successfully. Aug 13 07:18:19.243713 systemd[1]: cri-containerd-dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f.scope: Deactivated successfully. Aug 13 07:18:19.253759 containerd[1720]: time="2025-08-13T07:18:19.253679515Z" level=info msg="shim disconnected" id=d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c namespace=k8s.io Aug 13 07:18:19.253759 containerd[1720]: time="2025-08-13T07:18:19.253745517Z" level=warning msg="cleaning up after shim disconnected" id=d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c namespace=k8s.io Aug 13 07:18:19.253759 containerd[1720]: time="2025-08-13T07:18:19.253759618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:19.258348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c-rootfs.mount: Deactivated successfully. 
Aug 13 07:18:19.295005 containerd[1720]: time="2025-08-13T07:18:19.294946190Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:18:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:18:19.302288 containerd[1720]: time="2025-08-13T07:18:19.300905074Z" level=info msg="shim disconnected" id=dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f namespace=k8s.io Aug 13 07:18:19.302288 containerd[1720]: time="2025-08-13T07:18:19.301023578Z" level=warning msg="cleaning up after shim disconnected" id=dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f namespace=k8s.io Aug 13 07:18:19.302288 containerd[1720]: time="2025-08-13T07:18:19.301039479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:18:19.309154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f-rootfs.mount: Deactivated successfully. Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.234 [WARNING][6517] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.234 [INFO][6517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.234 [INFO][6517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" iface="eth0" netns="" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.234 [INFO][6517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.234 [INFO][6517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.325 [INFO][6563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.326 [INFO][6563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.326 [INFO][6563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.338 [WARNING][6563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.338 [INFO][6563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.341 [INFO][6563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.349327 containerd[1720]: 2025-08-13 07:18:19.346 [INFO][6517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.350679 containerd[1720]: time="2025-08-13T07:18:19.349587279Z" level=info msg="TearDown network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" successfully" Aug 13 07:18:19.350679 containerd[1720]: time="2025-08-13T07:18:19.349617680Z" level=info msg="StopPodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" returns successfully" Aug 13 07:18:19.350679 containerd[1720]: time="2025-08-13T07:18:19.350484806Z" level=info msg="RemovePodSandbox for \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" Aug 13 07:18:19.350679 containerd[1720]: time="2025-08-13T07:18:19.350518507Z" level=info msg="Forcibly stopping sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\"" Aug 13 07:18:19.472086 systemd-networkd[1578]: calid0eb99a19f7: Link DOWN Aug 13 07:18:19.472104 systemd-networkd[1578]: calid0eb99a19f7: Lost carrier Aug 13 07:18:19.492456 systemd-networkd[1578]: cali7a158eb2135: Link DOWN Aug 
13 07:18:19.492466 systemd-networkd[1578]: cali7a158eb2135: Lost carrier Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.408 [WARNING][6643] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.408 [INFO][6643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.408 [INFO][6643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" iface="eth0" netns="" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.408 [INFO][6643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.408 [INFO][6643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.480 [INFO][6651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.480 [INFO][6651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.480 [INFO][6651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.500 [WARNING][6651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.500 [INFO][6651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" HandleID="k8s-pod-network.20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Workload="ci--4081.3.5--a--7346cb15f0-k8s-whisker--7dc9959b6b--zhz6g-eth0" Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.503 [INFO][6651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.510075 containerd[1720]: 2025-08-13 07:18:19.507 [INFO][6643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68" Aug 13 07:18:19.512927 containerd[1720]: time="2025-08-13T07:18:19.510120539Z" level=info msg="TearDown network for sandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" successfully" Aug 13 07:18:19.545932 containerd[1720]: time="2025-08-13T07:18:19.545629337Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:19.550136 containerd[1720]: time="2025-08-13T07:18:19.547857005Z" level=info msg="RemovePodSandbox \"20437ce20e30120960b1467241ea0904795f3e8e7bc5b5e5a26aba0e3a6bad68\" returns successfully" Aug 13 07:18:19.550136 containerd[1720]: time="2025-08-13T07:18:19.548680531Z" level=info msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.466 [INFO][6618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.466 [INFO][6618] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" iface="eth0" netns="/var/run/netns/cni-ab738e73-5677-f9f9-f67f-023e30d1846b" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.468 [INFO][6618] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" iface="eth0" netns="/var/run/netns/cni-ab738e73-5677-f9f9-f67f-023e30d1846b" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.488 [INFO][6618] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" after=21.700471ms iface="eth0" netns="/var/run/netns/cni-ab738e73-5677-f9f9-f67f-023e30d1846b" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.488 [INFO][6618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.488 [INFO][6618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.552 [INFO][6662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.552 [INFO][6662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.554 [INFO][6662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.647 [INFO][6662] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.647 [INFO][6662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0" Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.649 [INFO][6662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.661062 containerd[1720]: 2025-08-13 07:18:19.656 [INFO][6618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Aug 13 07:18:19.667783 containerd[1720]: time="2025-08-13T07:18:19.666746879Z" level=info msg="TearDown network for sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" successfully" Aug 13 07:18:19.667783 containerd[1720]: time="2025-08-13T07:18:19.667339998Z" level=info msg="StopPodSandbox for \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" returns successfully" Aug 13 07:18:19.674004 systemd[1]: run-netns-cni\x2dab738e73\x2d5677\x2df9f9\x2df67f\x2d023e30d1846b.mount: Deactivated successfully. 
Aug 13 07:18:19.737011 kubelet[3215]: I0813 07:18:19.736831 3215 scope.go:117] "RemoveContainer" containerID="b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119" Aug 13 07:18:19.739727 containerd[1720]: time="2025-08-13T07:18:19.739355823Z" level=info msg="RemoveContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\"" Aug 13 07:18:19.756698 kubelet[3215]: I0813 07:18:19.756666 3215 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Aug 13 07:18:19.762060 containerd[1720]: time="2025-08-13T07:18:19.761009492Z" level=info msg="RemoveContainer for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" returns successfully" Aug 13 07:18:19.762205 kubelet[3215]: I0813 07:18:19.761317 3215 scope.go:117] "RemoveContainer" containerID="b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119" Aug 13 07:18:19.762488 containerd[1720]: time="2025-08-13T07:18:19.762449237Z" level=error msg="ContainerStatus for \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\": not found" Aug 13 07:18:19.762678 kubelet[3215]: E0813 07:18:19.762607 3215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\": not found" containerID="b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119" Aug 13 07:18:19.762678 kubelet[3215]: I0813 07:18:19.762642 3215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119"} err="failed to get container status 
\"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6ec00907ad002fa4e0903f642bd973d7c6bad7baf0c91d9ff0344909ce06119\": not found" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.479 [INFO][6631] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.479 [INFO][6631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" iface="eth0" netns="/var/run/netns/cni-1f4bb425-8070-af28-91eb-c240ea2350f6" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.479 [INFO][6631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" iface="eth0" netns="/var/run/netns/cni-1f4bb425-8070-af28-91eb-c240ea2350f6" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.501 [INFO][6631] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" after=22.6563ms iface="eth0" netns="/var/run/netns/cni-1f4bb425-8070-af28-91eb-c240ea2350f6" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.502 [INFO][6631] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.502 [INFO][6631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.607 [INFO][6666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.608 [INFO][6666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.650 [INFO][6666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.758 [INFO][6666] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.758 [INFO][6666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0" Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.760 [INFO][6666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.766692 containerd[1720]: 2025-08-13 07:18:19.762 [INFO][6631] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Aug 13 07:18:19.767543 containerd[1720]: time="2025-08-13T07:18:19.766938275Z" level=info msg="TearDown network for sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" successfully" Aug 13 07:18:19.767543 containerd[1720]: time="2025-08-13T07:18:19.766962976Z" level=info msg="StopPodSandbox for \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" returns successfully" Aug 13 07:18:19.789882 kubelet[3215]: I0813 07:18:19.789412 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d895fcd6-d479-4f4e-87f8-3b6aee927688-calico-apiserver-certs\") pod \"d895fcd6-d479-4f4e-87f8-3b6aee927688\" (UID: \"d895fcd6-d479-4f4e-87f8-3b6aee927688\") " Aug 13 07:18:19.789882 kubelet[3215]: I0813 07:18:19.789474 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr6sj\" (UniqueName: \"kubernetes.io/projected/d895fcd6-d479-4f4e-87f8-3b6aee927688-kube-api-access-pr6sj\") pod \"d895fcd6-d479-4f4e-87f8-3b6aee927688\" (UID: \"d895fcd6-d479-4f4e-87f8-3b6aee927688\") " Aug 13 07:18:19.800427 kubelet[3215]: I0813 07:18:19.800377 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d895fcd6-d479-4f4e-87f8-3b6aee927688-kube-api-access-pr6sj" (OuterVolumeSpecName: "kube-api-access-pr6sj") pod "d895fcd6-d479-4f4e-87f8-3b6aee927688" (UID: "d895fcd6-d479-4f4e-87f8-3b6aee927688"). InnerVolumeSpecName "kube-api-access-pr6sj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:18:19.801276 kubelet[3215]: I0813 07:18:19.800669 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d895fcd6-d479-4f4e-87f8-3b6aee927688-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d895fcd6-d479-4f4e-87f8-3b6aee927688" (UID: "d895fcd6-d479-4f4e-87f8-3b6aee927688"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.705 [WARNING][6689] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bcaff83-98f1-4f1e-9ec2-0de878c93569", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10", Pod:"coredns-668d6bf9bc-dfzv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9ef0e554d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.705 [INFO][6689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.705 [INFO][6689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" iface="eth0" netns="" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.705 [INFO][6689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.705 [INFO][6689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.799 [INFO][6703] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.799 [INFO][6703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.799 [INFO][6703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.813 [WARNING][6703] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.813 [INFO][6703] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.817 [INFO][6703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.821440 containerd[1720]: 2025-08-13 07:18:19.819 [INFO][6689] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.822917 containerd[1720]: time="2025-08-13T07:18:19.822527793Z" level=info msg="TearDown network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" successfully" Aug 13 07:18:19.822917 containerd[1720]: time="2025-08-13T07:18:19.822568094Z" level=info msg="StopPodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" returns successfully" Aug 13 07:18:19.823327 containerd[1720]: time="2025-08-13T07:18:19.823238215Z" level=info msg="RemovePodSandbox for \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" Aug 13 07:18:19.823457 containerd[1720]: time="2025-08-13T07:18:19.823435521Z" level=info msg="Forcibly stopping sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\"" Aug 13 07:18:19.890500 kubelet[3215]: I0813 07:18:19.890448 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/b6d30009-e3c1-496f-8ea4-de2a0c63018b-calico-apiserver-certs\") pod \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\" (UID: \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\") " Aug 13 07:18:19.890677 kubelet[3215]: I0813 07:18:19.890523 3215 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whl7n\" (UniqueName: \"kubernetes.io/projected/b6d30009-e3c1-496f-8ea4-de2a0c63018b-kube-api-access-whl7n\") pod \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\" (UID: \"b6d30009-e3c1-496f-8ea4-de2a0c63018b\") " Aug 13 07:18:19.891756 kubelet[3215]: I0813 07:18:19.891060 3215 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d895fcd6-d479-4f4e-87f8-3b6aee927688-calico-apiserver-certs\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:18:19.891756 kubelet[3215]: I0813 07:18:19.891086 3215 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pr6sj\" (UniqueName: \"kubernetes.io/projected/d895fcd6-d479-4f4e-87f8-3b6aee927688-kube-api-access-pr6sj\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:18:19.895365 kubelet[3215]: I0813 07:18:19.895316 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d30009-e3c1-496f-8ea4-de2a0c63018b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "b6d30009-e3c1-496f-8ea4-de2a0c63018b" (UID: "b6d30009-e3c1-496f-8ea4-de2a0c63018b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 07:18:19.897662 kubelet[3215]: I0813 07:18:19.897625 3215 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d30009-e3c1-496f-8ea4-de2a0c63018b-kube-api-access-whl7n" (OuterVolumeSpecName: "kube-api-access-whl7n") pod "b6d30009-e3c1-496f-8ea4-de2a0c63018b" (UID: "b6d30009-e3c1-496f-8ea4-de2a0c63018b"). 
InnerVolumeSpecName "kube-api-access-whl7n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.891 [WARNING][6722] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bcaff83-98f1-4f1e-9ec2-0de878c93569", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 17, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-7346cb15f0", ContainerID:"59312dc6b40df9edfc0754734f4e35913850deabffa9fa0395186756c00c4f10", Pod:"coredns-668d6bf9bc-dfzv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9ef0e554d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.891 [INFO][6722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.891 [INFO][6722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" iface="eth0" netns="" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.891 [INFO][6722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.891 [INFO][6722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.926 [INFO][6732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.926 [INFO][6732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.927 [INFO][6732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.935 [WARNING][6732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.936 [INFO][6732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" HandleID="k8s-pod-network.23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Workload="ci--4081.3.5--a--7346cb15f0-k8s-coredns--668d6bf9bc--dfzv4-eth0" Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.937 [INFO][6732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:18:19.942027 containerd[1720]: 2025-08-13 07:18:19.940 [INFO][6722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1" Aug 13 07:18:19.942784 containerd[1720]: time="2025-08-13T07:18:19.942070387Z" level=info msg="TearDown network for sandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" successfully" Aug 13 07:18:19.955301 containerd[1720]: time="2025-08-13T07:18:19.953385937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:18:19.955301 containerd[1720]: time="2025-08-13T07:18:19.953486140Z" level=info msg="RemovePodSandbox \"23c0d84b577b0e7437175cfa95e9e822d4996bc9c25c5c6e967092863ba656d1\" returns successfully" Aug 13 07:18:19.992395 kubelet[3215]: I0813 07:18:19.992284 3215 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6d30009-e3c1-496f-8ea4-de2a0c63018b-calico-apiserver-certs\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:18:19.992395 kubelet[3215]: I0813 07:18:19.992317 3215 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-whl7n\" (UniqueName: \"kubernetes.io/projected/b6d30009-e3c1-496f-8ea4-de2a0c63018b-kube-api-access-whl7n\") on node \"ci-4081.3.5-a-7346cb15f0\" DevicePath \"\"" Aug 13 07:18:20.042083 systemd[1]: Removed slice kubepods-besteffort-podd895fcd6_d479_4f4e_87f8_3b6aee927688.slice - libcontainer container kubepods-besteffort-podd895fcd6_d479_4f4e_87f8_3b6aee927688.slice. Aug 13 07:18:20.042481 systemd[1]: kubepods-besteffort-podd895fcd6_d479_4f4e_87f8_3b6aee927688.slice: Consumed 1.678s CPU time. Aug 13 07:18:20.173440 systemd[1]: run-netns-cni\x2d1f4bb425\x2d8070\x2daf28\x2d91eb\x2dc240ea2350f6.mount: Deactivated successfully. Aug 13 07:18:20.173580 systemd[1]: var-lib-kubelet-pods-d895fcd6\x2dd479\x2d4f4e\x2d87f8\x2d3b6aee927688-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpr6sj.mount: Deactivated successfully. Aug 13 07:18:20.173669 systemd[1]: var-lib-kubelet-pods-d895fcd6\x2dd479\x2d4f4e\x2d87f8\x2d3b6aee927688-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 07:18:20.173748 systemd[1]: var-lib-kubelet-pods-b6d30009\x2de3c1\x2d496f\x2d8ea4\x2dde2a0c63018b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhl7n.mount: Deactivated successfully. 
Aug 13 07:18:20.173827 systemd[1]: var-lib-kubelet-pods-b6d30009\x2de3c1\x2d496f\x2d8ea4\x2dde2a0c63018b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 07:18:20.255864 kubelet[3215]: I0813 07:18:20.255744 3215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d895fcd6-d479-4f4e-87f8-3b6aee927688" path="/var/lib/kubelet/pods/d895fcd6-d479-4f4e-87f8-3b6aee927688/volumes" Aug 13 07:18:20.260721 systemd[1]: Removed slice kubepods-besteffort-podb6d30009_e3c1_496f_8ea4_de2a0c63018b.slice - libcontainer container kubepods-besteffort-podb6d30009_e3c1_496f_8ea4_de2a0c63018b.slice. Aug 13 07:18:21.591554 containerd[1720]: time="2025-08-13T07:18:21.591501875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.593997 containerd[1720]: time="2025-08-13T07:18:21.593930253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:18:21.598083 containerd[1720]: time="2025-08-13T07:18:21.598011584Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.605476 containerd[1720]: time="2025-08-13T07:18:21.605163614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:18:21.606342 containerd[1720]: time="2025-08-13T07:18:21.606299651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.768362784s" Aug 13 07:18:21.606463 containerd[1720]: time="2025-08-13T07:18:21.606341752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:18:21.609292 containerd[1720]: time="2025-08-13T07:18:21.608536922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:18:21.631289 containerd[1720]: time="2025-08-13T07:18:21.631093847Z" level=info msg="CreateContainer within sandbox \"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:18:21.665336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322122834.mount: Deactivated successfully. Aug 13 07:18:21.669081 containerd[1720]: time="2025-08-13T07:18:21.669041467Z" level=info msg="CreateContainer within sandbox \"572122f4b8890ea0407038f6603e51935fef87ea322fbcd2bc298f07c70ec3c8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9\"" Aug 13 07:18:21.670120 containerd[1720]: time="2025-08-13T07:18:21.669993397Z" level=info msg="StartContainer for \"ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9\"" Aug 13 07:18:21.701411 systemd[1]: Started cri-containerd-ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9.scope - libcontainer container ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9. 
Aug 13 07:18:21.751294 containerd[1720]: time="2025-08-13T07:18:21.751236508Z" level=info msg="StartContainer for \"ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9\" returns successfully"
Aug 13 07:18:21.792664 kubelet[3215]: I0813 07:18:21.791847 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65d98d4c87-tmh2g" podStartSLOduration=30.858496298 podStartE2EDuration="48.791825612s" podCreationTimestamp="2025-08-13 07:17:33 +0000 UTC" firstStartedPulling="2025-08-13 07:18:03.67430758 +0000 UTC m=+49.527161432" lastFinishedPulling="2025-08-13 07:18:21.607636894 +0000 UTC m=+67.460490746" observedRunningTime="2025-08-13 07:18:21.789948352 +0000 UTC m=+67.642802204" watchObservedRunningTime="2025-08-13 07:18:21.791825612 +0000 UTC m=+67.644679564"
Aug 13 07:18:22.255864 kubelet[3215]: I0813 07:18:22.255816 3215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d30009-e3c1-496f-8ea4-de2a0c63018b" path="/var/lib/kubelet/pods/b6d30009-e3c1-496f-8ea4-de2a0c63018b/volumes"
Aug 13 07:18:22.805848 systemd[1]: run-containerd-runc-k8s.io-ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9-runc.hGLRfH.mount: Deactivated successfully.
Aug 13 07:18:23.109334 containerd[1720]: time="2025-08-13T07:18:23.109178544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:23.112366 containerd[1720]: time="2025-08-13T07:18:23.112318545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 07:18:23.118948 containerd[1720]: time="2025-08-13T07:18:23.118875956Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:23.123533 containerd[1720]: time="2025-08-13T07:18:23.123479204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:18:23.124307 containerd[1720]: time="2025-08-13T07:18:23.124098724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.5155226s"
Aug 13 07:18:23.124307 containerd[1720]: time="2025-08-13T07:18:23.124140025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 07:18:23.127301 containerd[1720]: time="2025-08-13T07:18:23.126865513Z" level=info msg="CreateContainer within sandbox \"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 07:18:23.169592 containerd[1720]: time="2025-08-13T07:18:23.169495482Z" level=info msg="CreateContainer within sandbox \"6a2ace4cd9fe71c3e406592d29ac2b3cb95399cfa4448f0c0d59111b51f17d36\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"32cd1af7a72067e6c61ceadf15d4d33d8607b6916ea65ee55794a3e3d11f8c60\""
Aug 13 07:18:23.170293 containerd[1720]: time="2025-08-13T07:18:23.170243106Z" level=info msg="StartContainer for \"32cd1af7a72067e6c61ceadf15d4d33d8607b6916ea65ee55794a3e3d11f8c60\""
Aug 13 07:18:23.208418 systemd[1]: Started cri-containerd-32cd1af7a72067e6c61ceadf15d4d33d8607b6916ea65ee55794a3e3d11f8c60.scope - libcontainer container 32cd1af7a72067e6c61ceadf15d4d33d8607b6916ea65ee55794a3e3d11f8c60.
Aug 13 07:18:23.239833 containerd[1720]: time="2025-08-13T07:18:23.239720839Z" level=info msg="StartContainer for \"32cd1af7a72067e6c61ceadf15d4d33d8607b6916ea65ee55794a3e3d11f8c60\" returns successfully"
Aug 13 07:18:23.352880 kubelet[3215]: I0813 07:18:23.352824 3215 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 07:18:23.352880 kubelet[3215]: I0813 07:18:23.352867 3215 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 07:18:23.789549 kubelet[3215]: I0813 07:18:23.789484 3215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kngq7" podStartSLOduration=31.019732892 podStartE2EDuration="50.789463105s" podCreationTimestamp="2025-08-13 07:17:33 +0000 UTC" firstStartedPulling="2025-08-13 07:18:03.355517348 +0000 UTC m=+49.208371200" lastFinishedPulling="2025-08-13 07:18:23.125247461 +0000 UTC m=+68.978101413" observedRunningTime="2025-08-13 07:18:23.788871986 +0000 UTC m=+69.641725838" watchObservedRunningTime="2025-08-13 07:18:23.789463105 +0000 UTC m=+69.642316957"
Aug 13 07:18:28.605904 systemd[1]: run-containerd-runc-k8s.io-6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120-runc.cWvXPv.mount: Deactivated successfully.
Aug 13 07:18:43.737958 systemd[1]: run-containerd-runc-k8s.io-e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3-runc.ZtFAsI.mount: Deactivated successfully.
Aug 13 07:18:50.041799 systemd[1]: run-containerd-runc-k8s.io-ec9848a593b793a7ea734d47098436c5b7e3c82ab7d38746aa53f30481c49fa9-runc.LXbzVD.mount: Deactivated successfully.
Aug 13 07:19:03.176718 systemd[1]: Started sshd@7-10.200.4.46:22-10.200.16.10:42416.service - OpenSSH per-connection server daemon (10.200.16.10:42416).
Aug 13 07:19:03.790441 sshd[6998]: Accepted publickey for core from 10.200.16.10 port 42416 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:03.792008 sshd[6998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:03.801308 systemd-logind[1689]: New session 10 of user core.
Aug 13 07:19:03.806431 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 07:19:04.352968 sshd[6998]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:04.358181 systemd[1]: sshd@7-10.200.4.46:22-10.200.16.10:42416.service: Deactivated successfully.
Aug 13 07:19:04.368108 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:19:04.370218 systemd-logind[1689]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:19:04.372899 systemd-logind[1689]: Removed session 10.
Aug 13 07:19:09.464377 systemd[1]: Started sshd@8-10.200.4.46:22-10.200.16.10:42426.service - OpenSSH per-connection server daemon (10.200.16.10:42426).
Aug 13 07:19:10.063503 sshd[7034]: Accepted publickey for core from 10.200.16.10 port 42426 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:10.065520 sshd[7034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:10.078501 systemd-logind[1689]: New session 11 of user core.
Aug 13 07:19:10.086410 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:19:10.626515 sshd[7034]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:10.630472 systemd-logind[1689]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:19:10.631365 systemd[1]: sshd@8-10.200.4.46:22-10.200.16.10:42426.service: Deactivated successfully.
Aug 13 07:19:10.635224 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:19:10.638913 systemd-logind[1689]: Removed session 11.
Aug 13 07:19:15.738719 systemd[1]: Started sshd@9-10.200.4.46:22-10.200.16.10:53620.service - OpenSSH per-connection server daemon (10.200.16.10:53620).
Aug 13 07:19:16.330286 sshd[7071]: Accepted publickey for core from 10.200.16.10 port 53620 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:16.331936 sshd[7071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:16.336943 systemd-logind[1689]: New session 12 of user core.
Aug 13 07:19:16.344417 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:19:16.817364 sshd[7071]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:16.822762 systemd[1]: sshd@9-10.200.4.46:22-10.200.16.10:53620.service: Deactivated successfully.
Aug 13 07:19:16.825077 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:19:16.826559 systemd-logind[1689]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:19:16.828979 systemd-logind[1689]: Removed session 12.
Aug 13 07:19:16.926612 systemd[1]: Started sshd@10-10.200.4.46:22-10.200.16.10:53622.service - OpenSSH per-connection server daemon (10.200.16.10:53622).
Aug 13 07:19:17.507487 sshd[7085]: Accepted publickey for core from 10.200.16.10 port 53622 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:17.510200 sshd[7085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:17.514154 systemd-logind[1689]: New session 13 of user core.
Aug 13 07:19:17.517486 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:19:18.033674 sshd[7085]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:18.038813 systemd-logind[1689]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:19:18.039656 systemd[1]: sshd@10-10.200.4.46:22-10.200.16.10:53622.service: Deactivated successfully.
Aug 13 07:19:18.041989 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:19:18.043801 systemd-logind[1689]: Removed session 13.
Aug 13 07:19:18.147749 systemd[1]: Started sshd@11-10.200.4.46:22-10.200.16.10:53630.service - OpenSSH per-connection server daemon (10.200.16.10:53630).
Aug 13 07:19:18.729332 sshd[7096]: Accepted publickey for core from 10.200.16.10 port 53630 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:18.732855 sshd[7096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:18.737311 systemd-logind[1689]: New session 14 of user core.
Aug 13 07:19:18.745426 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:19:19.224464 sshd[7096]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:19.228731 systemd[1]: sshd@11-10.200.4.46:22-10.200.16.10:53630.service: Deactivated successfully.
Aug 13 07:19:19.231124 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:19:19.232119 systemd-logind[1689]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:19:19.233148 systemd-logind[1689]: Removed session 14.
Aug 13 07:19:19.956632 kubelet[3215]: I0813 07:19:19.956592 3215 scope.go:117] "RemoveContainer" containerID="41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc"
Aug 13 07:19:19.957961 containerd[1720]: time="2025-08-13T07:19:19.957922825Z" level=info msg="RemoveContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\""
Aug 13 07:19:19.977646 containerd[1720]: time="2025-08-13T07:19:19.977600654Z" level=info msg="RemoveContainer for \"41c21da3f912bbd5e138ae53e6b9121cb3981579119589499cd93480be1161dc\" returns successfully"
Aug 13 07:19:19.979235 containerd[1720]: time="2025-08-13T07:19:19.979207097Z" level=info msg="StopPodSandbox for \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\""
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.017 [WARNING][7124] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.018 [INFO][7124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.018 [INFO][7124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" iface="eth0" netns=""
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.018 [INFO][7124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.018 [INFO][7124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.040 [INFO][7131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.040 [INFO][7131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.040 [INFO][7131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.047 [WARNING][7131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.047 [INFO][7131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.049 [INFO][7131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:19:20.051565 containerd[1720]: 2025-08-13 07:19:20.050 [INFO][7124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.052208 containerd[1720]: time="2025-08-13T07:19:20.051631244Z" level=info msg="TearDown network for sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" successfully"
Aug 13 07:19:20.052208 containerd[1720]: time="2025-08-13T07:19:20.051679645Z" level=info msg="StopPodSandbox for \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" returns successfully"
Aug 13 07:19:20.052532 containerd[1720]: time="2025-08-13T07:19:20.052499167Z" level=info msg="RemovePodSandbox for \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\""
Aug 13 07:19:20.052611 containerd[1720]: time="2025-08-13T07:19:20.052535368Z" level=info msg="Forcibly stopping sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\""
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.089 [WARNING][7145] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.090 [INFO][7145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.090 [INFO][7145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" iface="eth0" netns=""
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.090 [INFO][7145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.090 [INFO][7145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.112 [INFO][7153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.112 [INFO][7153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.112 [INFO][7153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.118 [WARNING][7153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.118 [INFO][7153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" HandleID="k8s-pod-network.d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--rqqwz-eth0"
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.121 [INFO][7153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:19:20.125780 containerd[1720]: 2025-08-13 07:19:20.122 [INFO][7145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c"
Aug 13 07:19:20.125780 containerd[1720]: time="2025-08-13T07:19:20.124171893Z" level=info msg="TearDown network for sandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" successfully"
Aug 13 07:19:20.134358 containerd[1720]: time="2025-08-13T07:19:20.134309666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:19:20.134510 containerd[1720]: time="2025-08-13T07:19:20.134407568Z" level=info msg="RemovePodSandbox \"d9195e64799cf719dc7422b79e74dffecbae0046cb2ce3039ef43c3e816c4b8c\" returns successfully"
Aug 13 07:19:20.134974 containerd[1720]: time="2025-08-13T07:19:20.134938783Z" level=info msg="StopPodSandbox for \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\""
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.167 [WARNING][7167] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.167 [INFO][7167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.168 [INFO][7167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" iface="eth0" netns=""
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.168 [INFO][7167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.168 [INFO][7167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.190 [INFO][7174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.190 [INFO][7174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.190 [INFO][7174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.196 [WARNING][7174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.196 [INFO][7174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.197 [INFO][7174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:19:20.200342 containerd[1720]: 2025-08-13 07:19:20.199 [INFO][7167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.200902 containerd[1720]: time="2025-08-13T07:19:20.200392842Z" level=info msg="TearDown network for sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" successfully"
Aug 13 07:19:20.200902 containerd[1720]: time="2025-08-13T07:19:20.200422943Z" level=info msg="StopPodSandbox for \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" returns successfully"
Aug 13 07:19:20.201227 containerd[1720]: time="2025-08-13T07:19:20.201178963Z" level=info msg="RemovePodSandbox for \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\""
Aug 13 07:19:20.201346 containerd[1720]: time="2025-08-13T07:19:20.201231664Z" level=info msg="Forcibly stopping sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\""
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.234 [WARNING][7188] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" WorkloadEndpoint="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.234 [INFO][7188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.234 [INFO][7188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" iface="eth0" netns=""
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.234 [INFO][7188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.234 [INFO][7188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.256 [INFO][7196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.257 [INFO][7196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.257 [INFO][7196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.264 [WARNING][7196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.264 [INFO][7196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" HandleID="k8s-pod-network.dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f" Workload="ci--4081.3.5--a--7346cb15f0-k8s-calico--apiserver--5cdd967ff--7cwjt-eth0"
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.267 [INFO][7196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 07:19:20.271089 containerd[1720]: 2025-08-13 07:19:20.269 [INFO][7188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f"
Aug 13 07:19:20.271089 containerd[1720]: time="2025-08-13T07:19:20.270842135Z" level=info msg="TearDown network for sandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" successfully"
Aug 13 07:19:20.284643 containerd[1720]: time="2025-08-13T07:19:20.284601505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 07:19:20.284788 containerd[1720]: time="2025-08-13T07:19:20.284678707Z" level=info msg="RemovePodSandbox \"dfbd72f5408bf9a62f500227e50dbab43efac9dfde887de79b21d4dbe4c50f4f\" returns successfully"
Aug 13 07:19:24.330127 systemd[1]: Started sshd@12-10.200.4.46:22-10.200.16.10:55366.service - OpenSSH per-connection server daemon (10.200.16.10:55366).
Aug 13 07:19:24.919171 sshd[7226]: Accepted publickey for core from 10.200.16.10 port 55366 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:24.920787 sshd[7226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:24.929170 systemd-logind[1689]: New session 15 of user core.
Aug 13 07:19:24.935591 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:19:25.412185 sshd[7226]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:25.417554 systemd[1]: sshd@12-10.200.4.46:22-10.200.16.10:55366.service: Deactivated successfully.
Aug 13 07:19:25.420453 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:19:25.422425 systemd-logind[1689]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:19:25.423579 systemd-logind[1689]: Removed session 15.
Aug 13 07:19:30.523583 systemd[1]: Started sshd@13-10.200.4.46:22-10.200.16.10:44640.service - OpenSSH per-connection server daemon (10.200.16.10:44640).
Aug 13 07:19:31.118829 sshd[7275]: Accepted publickey for core from 10.200.16.10 port 44640 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:31.119547 sshd[7275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:31.124497 systemd-logind[1689]: New session 16 of user core.
Aug 13 07:19:31.128450 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:19:31.623903 sshd[7275]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:31.627812 systemd-logind[1689]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:19:31.628476 systemd[1]: sshd@13-10.200.4.46:22-10.200.16.10:44640.service: Deactivated successfully.
Aug 13 07:19:31.631174 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:19:31.632239 systemd-logind[1689]: Removed session 16.
Aug 13 07:19:36.739562 systemd[1]: Started sshd@14-10.200.4.46:22-10.200.16.10:44646.service - OpenSSH per-connection server daemon (10.200.16.10:44646).
Aug 13 07:19:37.349849 sshd[7295]: Accepted publickey for core from 10.200.16.10 port 44646 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:37.354312 sshd[7295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:37.360736 systemd-logind[1689]: New session 17 of user core.
Aug 13 07:19:37.365447 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:19:37.839215 sshd[7295]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:37.843375 systemd-logind[1689]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:19:37.844841 systemd[1]: sshd@14-10.200.4.46:22-10.200.16.10:44646.service: Deactivated successfully.
Aug 13 07:19:37.851252 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:19:37.852998 systemd-logind[1689]: Removed session 17.
Aug 13 07:19:42.951354 systemd[1]: Started sshd@15-10.200.4.46:22-10.200.16.10:52288.service - OpenSSH per-connection server daemon (10.200.16.10:52288).
Aug 13 07:19:43.544590 sshd[7310]: Accepted publickey for core from 10.200.16.10 port 52288 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:43.546935 sshd[7310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:43.554790 systemd-logind[1689]: New session 18 of user core.
Aug 13 07:19:43.561815 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:19:43.715701 systemd[1]: run-containerd-runc-k8s.io-e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3-runc.RhBiSp.mount: Deactivated successfully.
Aug 13 07:19:44.063363 sshd[7310]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:44.067336 systemd[1]: sshd@15-10.200.4.46:22-10.200.16.10:52288.service: Deactivated successfully.
Aug 13 07:19:44.070993 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:19:44.072666 systemd-logind[1689]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:19:44.074078 systemd-logind[1689]: Removed session 18.
Aug 13 07:19:44.177617 systemd[1]: Started sshd@16-10.200.4.46:22-10.200.16.10:52302.service - OpenSSH per-connection server daemon (10.200.16.10:52302).
Aug 13 07:19:44.776333 sshd[7342]: Accepted publickey for core from 10.200.16.10 port 52302 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:44.778275 sshd[7342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:44.788116 systemd-logind[1689]: New session 19 of user core.
Aug 13 07:19:44.795694 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:19:45.330538 sshd[7342]: pam_unix(sshd:session): session closed for user core
Aug 13 07:19:45.336716 systemd[1]: sshd@16-10.200.4.46:22-10.200.16.10:52302.service: Deactivated successfully.
Aug 13 07:19:45.340092 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:19:45.342855 systemd-logind[1689]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:19:45.344801 systemd-logind[1689]: Removed session 19.
Aug 13 07:19:45.443676 systemd[1]: Started sshd@17-10.200.4.46:22-10.200.16.10:52314.service - OpenSSH per-connection server daemon (10.200.16.10:52314).
Aug 13 07:19:46.034931 sshd[7353]: Accepted publickey for core from 10.200.16.10 port 52314 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E
Aug 13 07:19:46.037094 sshd[7353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:19:46.042908 systemd-logind[1689]: New session 20 of user core.
Aug 13 07:19:46.051850 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:19:47.189158 sshd[7353]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:47.192734 systemd[1]: sshd@17-10.200.4.46:22-10.200.16.10:52314.service: Deactivated successfully. Aug 13 07:19:47.195061 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:19:47.197886 systemd-logind[1689]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:19:47.198955 systemd-logind[1689]: Removed session 20. Aug 13 07:19:47.302578 systemd[1]: Started sshd@18-10.200.4.46:22-10.200.16.10:52326.service - OpenSSH per-connection server daemon (10.200.16.10:52326). Aug 13 07:19:47.890131 sshd[7371]: Accepted publickey for core from 10.200.16.10 port 52326 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:19:47.891898 sshd[7371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:47.896143 systemd-logind[1689]: New session 21 of user core. Aug 13 07:19:47.901424 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:19:48.510560 sshd[7371]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:48.514546 systemd-logind[1689]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:19:48.515593 systemd[1]: sshd@18-10.200.4.46:22-10.200.16.10:52326.service: Deactivated successfully. Aug 13 07:19:48.517868 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:19:48.518940 systemd-logind[1689]: Removed session 21. Aug 13 07:19:48.619927 systemd[1]: Started sshd@19-10.200.4.46:22-10.200.16.10:52342.service - OpenSSH per-connection server daemon (10.200.16.10:52342). Aug 13 07:19:49.214870 sshd[7382]: Accepted publickey for core from 10.200.16.10 port 52342 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:19:49.215609 sshd[7382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:49.220474 systemd-logind[1689]: New session 22 of user core. 
Aug 13 07:19:49.226467 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:19:49.705502 sshd[7382]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:49.710370 systemd[1]: sshd@19-10.200.4.46:22-10.200.16.10:52342.service: Deactivated successfully. Aug 13 07:19:49.712915 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:19:49.714433 systemd-logind[1689]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:19:49.715843 systemd-logind[1689]: Removed session 22. Aug 13 07:19:54.818803 systemd[1]: Started sshd@20-10.200.4.46:22-10.200.16.10:43534.service - OpenSSH per-connection server daemon (10.200.16.10:43534). Aug 13 07:19:55.407815 sshd[7435]: Accepted publickey for core from 10.200.16.10 port 43534 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:19:55.410080 sshd[7435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:19:55.416312 systemd-logind[1689]: New session 23 of user core. Aug 13 07:19:55.419464 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:19:55.901553 sshd[7435]: pam_unix(sshd:session): session closed for user core Aug 13 07:19:55.908744 systemd[1]: sshd@20-10.200.4.46:22-10.200.16.10:43534.service: Deactivated successfully. Aug 13 07:19:55.909031 systemd-logind[1689]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:19:55.913626 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:19:55.918229 systemd-logind[1689]: Removed session 23. Aug 13 07:19:58.621882 systemd[1]: run-containerd-runc-k8s.io-6b4976b5e6253d7bcec5a8ddb7ca515c33849552d2250f68ad7f592523937120-runc.zTfxwA.mount: Deactivated successfully. Aug 13 07:20:01.017388 systemd[1]: Started sshd@21-10.200.4.46:22-10.200.16.10:52994.service - OpenSSH per-connection server daemon (10.200.16.10:52994). 
Aug 13 07:20:01.622668 sshd[7470]: Accepted publickey for core from 10.200.16.10 port 52994 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:20:01.624435 sshd[7470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:01.629755 systemd-logind[1689]: New session 24 of user core. Aug 13 07:20:01.635609 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:20:02.112781 sshd[7470]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:02.116971 systemd[1]: sshd@21-10.200.4.46:22-10.200.16.10:52994.service: Deactivated successfully. Aug 13 07:20:02.119648 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:20:02.120847 systemd-logind[1689]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:20:02.122308 systemd-logind[1689]: Removed session 24. Aug 13 07:20:04.332055 systemd[1]: run-containerd-runc-k8s.io-e7d37d956f6e1f85a0a2e123e5c343cfc57eaac790bb3c8c4c9183ac166cf1a3-runc.HsdgOd.mount: Deactivated successfully. Aug 13 07:20:07.222585 systemd[1]: Started sshd@22-10.200.4.46:22-10.200.16.10:53002.service - OpenSSH per-connection server daemon (10.200.16.10:53002). Aug 13 07:20:07.806884 sshd[7503]: Accepted publickey for core from 10.200.16.10 port 53002 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:20:07.807696 sshd[7503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:07.819901 systemd-logind[1689]: New session 25 of user core. Aug 13 07:20:07.824491 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 07:20:08.339540 sshd[7503]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:08.345898 systemd[1]: sshd@22-10.200.4.46:22-10.200.16.10:53002.service: Deactivated successfully. Aug 13 07:20:08.346402 systemd-logind[1689]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:20:08.352020 systemd[1]: session-25.scope: Deactivated successfully. 
Aug 13 07:20:08.358885 systemd-logind[1689]: Removed session 25. Aug 13 07:20:13.441632 systemd[1]: Started sshd@23-10.200.4.46:22-10.200.16.10:47278.service - OpenSSH per-connection server daemon (10.200.16.10:47278). Aug 13 07:20:14.029389 sshd[7516]: Accepted publickey for core from 10.200.16.10 port 47278 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:20:14.031013 sshd[7516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:14.035321 systemd-logind[1689]: New session 26 of user core. Aug 13 07:20:14.041434 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 07:20:14.580140 sshd[7516]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:14.586033 systemd-logind[1689]: Session 26 logged out. Waiting for processes to exit. Aug 13 07:20:14.586331 systemd[1]: sshd@23-10.200.4.46:22-10.200.16.10:47278.service: Deactivated successfully. Aug 13 07:20:14.590226 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 07:20:14.593800 systemd-logind[1689]: Removed session 26. Aug 13 07:20:19.689584 systemd[1]: Started sshd@24-10.200.4.46:22-10.200.16.10:47294.service - OpenSSH per-connection server daemon (10.200.16.10:47294). Aug 13 07:20:20.280316 sshd[7553]: Accepted publickey for core from 10.200.16.10 port 47294 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:20:20.281944 sshd[7553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:20.292275 systemd-logind[1689]: New session 27 of user core. Aug 13 07:20:20.296041 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 07:20:20.764156 sshd[7553]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:20.768268 systemd-logind[1689]: Session 27 logged out. Waiting for processes to exit. Aug 13 07:20:20.768988 systemd[1]: sshd@24-10.200.4.46:22-10.200.16.10:47294.service: Deactivated successfully. 
Aug 13 07:20:20.771673 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 07:20:20.772724 systemd-logind[1689]: Removed session 27. Aug 13 07:20:25.876761 systemd[1]: Started sshd@25-10.200.4.46:22-10.200.16.10:50932.service - OpenSSH per-connection server daemon (10.200.16.10:50932). Aug 13 07:20:26.472457 sshd[7586]: Accepted publickey for core from 10.200.16.10 port 50932 ssh2: RSA SHA256:YIOU27DDNg9nWy3/pRelkm3k9PS6yW5AASBxuPmap5E Aug 13 07:20:26.474794 sshd[7586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:20:26.480645 systemd-logind[1689]: New session 28 of user core. Aug 13 07:20:26.489418 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 07:20:26.968317 sshd[7586]: pam_unix(sshd:session): session closed for user core Aug 13 07:20:26.972814 systemd[1]: sshd@25-10.200.4.46:22-10.200.16.10:50932.service: Deactivated successfully. Aug 13 07:20:26.975405 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 07:20:26.976864 systemd-logind[1689]: Session 28 logged out. Waiting for processes to exit. Aug 13 07:20:26.977948 systemd-logind[1689]: Removed session 28.