Sep 12 17:37:59.141919 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:37:59.141956 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:37:59.141969 kernel: BIOS-provided physical RAM map:
Sep 12 17:37:59.141980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:37:59.141989 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 12 17:37:59.141999 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 12 17:37:59.142011 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 12 17:37:59.142026 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 12 17:37:59.142052 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 12 17:37:59.142064 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 12 17:37:59.142075 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 12 17:37:59.142086 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 12 17:37:59.142097 kernel: printk: bootconsole [earlyser0] enabled
Sep 12 17:37:59.142108 kernel: NX (Execute Disable) protection: active
Sep 12 17:37:59.142125 kernel: APIC: Static calls initialized
Sep 12 17:37:59.142138 kernel: efi: EFI v2.7 by Microsoft
Sep 12 17:37:59.142151 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Sep 12 17:37:59.142163 kernel: SMBIOS 3.1.0 present.
Sep 12 17:37:59.142176 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Sep 12 17:37:59.142188 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 12 17:37:59.142201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 12 17:37:59.142214 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Sep 12 17:37:59.142224 kernel: Hyper-V: Nested features: 0x1e0101
Sep 12 17:37:59.142237 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 12 17:37:59.142252 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 12 17:37:59.142264 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:37:59.142276 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:37:59.142289 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 12 17:37:59.142300 kernel: tsc: Detected 2593.907 MHz processor
Sep 12 17:37:59.142312 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:37:59.142323 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:37:59.142335 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 12 17:37:59.142347 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:37:59.142362 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:37:59.142389 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 12 17:37:59.142401 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 12 17:37:59.142412 kernel: Using GB pages for direct mapping
Sep 12 17:37:59.142422 kernel: Secure boot disabled
Sep 12 17:37:59.142434 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:37:59.142447 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 12 17:37:59.142465 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142484 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142497 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 12 17:37:59.142509 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 12 17:37:59.142522 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142536 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142551 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142568 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142582 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142597 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142611 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:37:59.142625 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 12 17:37:59.142639 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Sep 12 17:37:59.142653 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 12 17:37:59.142667 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 12 17:37:59.142686 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 12 17:37:59.142701 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 12 17:37:59.142715 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 12 17:37:59.142728 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 12 17:37:59.142741 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 12 17:37:59.142755 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Sep 12 17:37:59.142769 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:37:59.142783 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:37:59.142796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 12 17:37:59.142813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 12 17:37:59.142826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 12 17:37:59.142840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 12 17:37:59.142853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 12 17:37:59.142866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 12 17:37:59.142880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 12 17:37:59.142893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 12 17:37:59.142906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 12 17:37:59.142920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 12 17:37:59.142936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 12 17:37:59.142949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 12 17:37:59.142963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 12 17:37:59.142976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 12 17:37:59.142990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 12 17:37:59.143003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 12 17:37:59.143016 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 12 17:37:59.144254 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 12 17:37:59.144278 kernel: Zone ranges:
Sep 12 17:37:59.151406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:37:59.151428 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:37:59.151443 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:37:59.151458 kernel: Movable zone start for each node
Sep 12 17:37:59.151473 kernel: Early memory node ranges
Sep 12 17:37:59.151487 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:37:59.151503 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 12 17:37:59.151517 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 12 17:37:59.151532 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:37:59.151552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 12 17:37:59.151566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:37:59.151580 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:37:59.151595 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Sep 12 17:37:59.151609 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 12 17:37:59.151623 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 12 17:37:59.151637 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:37:59.151652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:37:59.151667 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:37:59.151685 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 12 17:37:59.151699 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:37:59.151713 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 12 17:37:59.151728 kernel: Booting paravirtualized kernel on Hyper-V
Sep 12 17:37:59.151742 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:37:59.151757 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:37:59.151771 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:37:59.151785 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:37:59.151799 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:37:59.151816 kernel: Hyper-V: PV spinlocks enabled
Sep 12 17:37:59.151830 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:37:59.151846 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:37:59.151861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:37:59.151875 kernel: random: crng init done
Sep 12 17:37:59.151889 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:37:59.151904 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:37:59.151918 kernel: Fallback order for Node 0: 0
Sep 12 17:37:59.151936 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Sep 12 17:37:59.151961 kernel: Policy zone: Normal
Sep 12 17:37:59.151979 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:37:59.151994 kernel: software IO TLB: area num 2.
Sep 12 17:37:59.152010 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 310124K reserved, 0K cma-reserved)
Sep 12 17:37:59.152026 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:37:59.152057 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:37:59.152073 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:37:59.152088 kernel: Dynamic Preempt: voluntary
Sep 12 17:37:59.152103 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:37:59.152117 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:37:59.152134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:37:59.152147 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:37:59.152161 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:37:59.152175 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:37:59.152188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:37:59.152204 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:37:59.152218 kernel: Using NULL legacy PIC
Sep 12 17:37:59.152231 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 12 17:37:59.152245 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:37:59.152259 kernel: Console: colour dummy device 80x25
Sep 12 17:37:59.152272 kernel: printk: console [tty1] enabled
Sep 12 17:37:59.152285 kernel: printk: console [ttyS0] enabled
Sep 12 17:37:59.152298 kernel: printk: bootconsole [earlyser0] disabled
Sep 12 17:37:59.152312 kernel: ACPI: Core revision 20230628
Sep 12 17:37:59.152325 kernel: Failed to register legacy timer interrupt
Sep 12 17:37:59.152342 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:37:59.152356 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 17:37:59.152369 kernel: Hyper-V: Using IPI hypercalls
Sep 12 17:37:59.152383 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 12 17:37:59.152397 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 12 17:37:59.152410 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 12 17:37:59.152424 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 12 17:37:59.152438 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 12 17:37:59.152452 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 12 17:37:59.152469 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Sep 12 17:37:59.152483 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:37:59.152496 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:37:59.152510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:37:59.152524 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:37:59.152538 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:37:59.152552 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:37:59.152566 kernel: RETBleed: Vulnerable
Sep 12 17:37:59.152579 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:37:59.152596 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:37:59.152610 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:37:59.152624 kernel: active return thunk: its_return_thunk
Sep 12 17:37:59.152638 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:37:59.152651 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:37:59.152665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:37:59.152679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:37:59.152693 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:37:59.152707 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:37:59.152722 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:37:59.152736 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:37:59.152754 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 12 17:37:59.152767 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 12 17:37:59.152781 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 12 17:37:59.152795 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 12 17:37:59.152810 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:37:59.152825 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:37:59.152839 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:37:59.152853 kernel: landlock: Up and running.
Sep 12 17:37:59.152867 kernel: SELinux: Initializing.
Sep 12 17:37:59.152882 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:37:59.152896 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:37:59.152911 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:37:59.152928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:37:59.152943 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:37:59.152958 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:37:59.152973 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:37:59.152987 kernel: signal: max sigframe size: 3632
Sep 12 17:37:59.153002 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:37:59.153016 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:37:59.153073 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:37:59.153088 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:37:59.153104 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:37:59.153122 kernel: .... node #0, CPUs: #1
Sep 12 17:37:59.153140 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 12 17:37:59.153158 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:37:59.153173 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:37:59.153189 kernel: smpboot: Max logical packages: 1
Sep 12 17:37:59.153205 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 12 17:37:59.153221 kernel: devtmpfs: initialized
Sep 12 17:37:59.153240 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:37:59.153254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 12 17:37:59.153270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:37:59.153283 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:37:59.153298 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:37:59.153312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:37:59.153326 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:37:59.153340 kernel: audit: type=2000 audit(1757698677.029:1): state=initialized audit_enabled=0 res=1
Sep 12 17:37:59.153354 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:37:59.153371 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:37:59.153385 kernel: cpuidle: using governor menu
Sep 12 17:37:59.153399 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:37:59.153412 kernel: dca service started, version 1.12.1
Sep 12 17:37:59.153427 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 12 17:37:59.153443 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:37:59.153457 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:37:59.153472 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:37:59.153487 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:37:59.153504 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:37:59.153519 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:37:59.153533 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:37:59.153547 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:37:59.153561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:37:59.153575 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:37:59.153590 kernel: ACPI: Interpreter enabled
Sep 12 17:37:59.153604 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:37:59.153619 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:37:59.153637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:37:59.153651 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:37:59.153665 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 12 17:37:59.153677 kernel: iommu: Default domain type: Translated
Sep 12 17:37:59.153690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:37:59.153704 kernel: efivars: Registered efivars operations
Sep 12 17:37:59.153718 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:37:59.153733 kernel: PCI: System does not support PCI
Sep 12 17:37:59.153747 kernel: vgaarb: loaded
Sep 12 17:37:59.153767 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 12 17:37:59.153782 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:37:59.153795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:37:59.153808 kernel: pnp: PnP ACPI init
Sep 12 17:37:59.153822 kernel: pnp: PnP ACPI: found 3 devices
Sep 12 17:37:59.153837 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:37:59.153851 kernel: NET: Registered PF_INET protocol family
Sep 12 17:37:59.153863 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:37:59.153877 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:37:59.153895 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:37:59.153910 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:37:59.153925 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:37:59.153939 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:37:59.153953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:37:59.153967 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:37:59.153981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:37:59.153996 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:37:59.154010 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:37:59.154027 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:37:59.154058 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Sep 12 17:37:59.154073 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:37:59.154086 kernel: Initialise system trusted keyrings
Sep 12 17:37:59.154101 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:37:59.154115 kernel: Key type asymmetric registered
Sep 12 17:37:59.154130 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:37:59.154145 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:37:59.154159 kernel: io scheduler mq-deadline registered
Sep 12 17:37:59.154178 kernel: io scheduler kyber registered
Sep 12 17:37:59.154193 kernel: io scheduler bfq registered
Sep 12 17:37:59.154208 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:37:59.154223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:37:59.154238 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:37:59.154254 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:37:59.154269 kernel: i8042: PNP: No PS/2 controller found.
Sep 12 17:37:59.154463 kernel: rtc_cmos 00:02: registered as rtc0
Sep 12 17:37:59.154595 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T17:37:58 UTC (1757698678)
Sep 12 17:37:59.154715 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 12 17:37:59.154734 kernel: intel_pstate: CPU model not supported
Sep 12 17:37:59.154749 kernel: efifb: probing for efifb
Sep 12 17:37:59.154765 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 17:37:59.154780 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 17:37:59.154795 kernel: efifb: scrolling: redraw
Sep 12 17:37:59.154810 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:37:59.154829 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:37:59.154844 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:37:59.154860 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:37:59.154875 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:37:59.154890 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:37:59.154906 kernel: Segment Routing with IPv6
Sep 12 17:37:59.154921 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:37:59.154936 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:37:59.154952 kernel: Key type dns_resolver registered
Sep 12 17:37:59.154967 kernel: IPI shorthand broadcast: enabled
Sep 12 17:37:59.154985 kernel: sched_clock: Marking stable (928110200, 52904000)->(1213359300, -232345100)
Sep 12 17:37:59.155000 kernel: registered taskstats version 1
Sep 12 17:37:59.155016 kernel: Loading compiled-in X.509 certificates
Sep 12 17:37:59.155087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:37:59.155101 kernel: Key type .fscrypt registered
Sep 12 17:37:59.155114 kernel: Key type fscrypt-provisioning registered
Sep 12 17:37:59.155128 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:37:59.155146 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:37:59.155165 kernel: ima: No architecture policies found
Sep 12 17:37:59.155178 kernel: clk: Disabling unused clocks
Sep 12 17:37:59.155191 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:37:59.155205 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:37:59.155219 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:37:59.155234 kernel: Run /init as init process
Sep 12 17:37:59.155249 kernel: with arguments:
Sep 12 17:37:59.155263 kernel: /init
Sep 12 17:37:59.155277 kernel: with environment:
Sep 12 17:37:59.155292 kernel: HOME=/
Sep 12 17:37:59.155311 kernel: TERM=linux
Sep 12 17:37:59.155326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:37:59.155344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:37:59.155362 systemd[1]: Detected virtualization microsoft.
Sep 12 17:37:59.155377 systemd[1]: Detected architecture x86-64.
Sep 12 17:37:59.155392 systemd[1]: Running in initrd.
Sep 12 17:37:59.155419 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:37:59.155438 systemd[1]: Hostname set to .
Sep 12 17:37:59.155453 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:37:59.155467 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:37:59.155481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:37:59.155497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:37:59.155518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:37:59.155534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:37:59.155550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:37:59.155569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:37:59.155586 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:37:59.155601 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:37:59.155617 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:37:59.155633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:37:59.155648 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:37:59.155662 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:37:59.155681 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:37:59.155696 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:37:59.155713 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:37:59.155728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:37:59.155742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:37:59.155757 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:37:59.155773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:37:59.155788 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:37:59.155803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:37:59.155821 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:37:59.155838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:37:59.155854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:37:59.155870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:37:59.155886 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:37:59.155902 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:37:59.155918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:37:59.155963 systemd-journald[176]: Collecting audit messages is disabled.
Sep 12 17:37:59.156003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:37:59.156017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:37:59.156049 systemd-journald[176]: Journal started
Sep 12 17:37:59.156084 systemd-journald[176]: Runtime Journal (/run/log/journal/bfabc79ea2104116bac958d4c926f911) is 8.0M, max 158.8M, 150.8M free.
Sep 12 17:37:59.140682 systemd-modules-load[177]: Inserted module 'overlay'
Sep 12 17:37:59.167386 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:37:59.171288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:37:59.180327 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:37:59.185390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:37:59.199477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:37:59.199508 kernel: Bridge firewalling registered
Sep 12 17:37:59.199241 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 12 17:37:59.202208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:37:59.214210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:37:59.228234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:37:59.241456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:37:59.249165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:37:59.257580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:37:59.266067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:37:59.268574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:37:59.269649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:37:59.289280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:37:59.301236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:37:59.307757 dracut-cmdline[206]: dracut-dracut-053
Sep 12 17:37:59.307757 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:37:59.313099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:37:59.357969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:37:59.377267 systemd-resolved[211]: Positive Trust Anchors:
Sep 12 17:37:59.377285 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:37:59.377344 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:37:59.380968 systemd-resolved[211]: Defaulting to hostname 'linux'.
Sep 12 17:37:59.382267 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:37:59.387162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:37:59.423048 kernel: SCSI subsystem initialized
Sep 12 17:37:59.433052 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:37:59.444054 kernel: iscsi: registered transport (tcp)
Sep 12 17:37:59.466037 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:37:59.466117 kernel: QLogic iSCSI HBA Driver
Sep 12 17:37:59.503293 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:37:59.515229 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:37:59.545616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:37:59.545727 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:37:59.548913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:37:59.590054 kernel: raid6: avx512x4 gen() 18410 MB/s
Sep 12 17:37:59.610046 kernel: raid6: avx512x2 gen() 18221 MB/s
Sep 12 17:37:59.629041 kernel: raid6: avx512x1 gen() 18345 MB/s
Sep 12 17:37:59.648042 kernel: raid6: avx2x4 gen() 18328 MB/s
Sep 12 17:37:59.667045 kernel: raid6: avx2x2 gen() 18342 MB/s
Sep 12 17:37:59.694571 kernel: raid6: avx2x1 gen() 14094 MB/s
Sep 12 17:37:59.694653 kernel: raid6: using algorithm avx512x4 gen() 18410 MB/s
Sep 12 17:37:59.716195 kernel: raid6: .... xor() 7972 MB/s, rmw enabled
Sep 12 17:37:59.716226 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:37:59.739057 kernel: xor: automatically using best checksumming function avx
Sep 12 17:37:59.886061 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:37:59.895495 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:37:59.903304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:37:59.914948 systemd-udevd[394]: Using default interface naming scheme 'v255'.
Sep 12 17:37:59.918879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:37:59.940208 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:37:59.953421 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Sep 12 17:37:59.981222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:37:59.990163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:38:00.033546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:38:00.045247 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:38:00.068848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:38:00.080009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:38:00.089738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:38:00.098355 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:38:00.110194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:38:00.140519 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:38:00.153634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:38:00.163975 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:38:00.164058 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:38:00.168298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:38:00.170771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:00.180045 kernel: hv_vmbus: Vmbus version:5.2
Sep 12 17:38:00.182684 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:00.208182 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:38:00.188222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:38:00.188521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:00.192184 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:00.212930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:00.232618 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 17:38:00.252939 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:38:00.253012 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:38:00.262063 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:38:00.268449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:00.279049 kernel: scsi host1: storvsc_host_t
Sep 12 17:38:00.279117 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:38:00.283365 kernel: scsi host0: storvsc_host_t
Sep 12 17:38:00.290617 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:38:00.290680 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 17:38:00.290917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:00.301907 kernel: PTP clock support registered
Sep 12 17:38:00.301936 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 17:38:00.310204 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:38:00.310248 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:38:00.311163 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:38:00.315561 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:38:00.315603 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:38:00.827520 systemd-resolved[211]: Clock change detected. Flushing caches.
Sep 12 17:38:00.835853 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:38:00.844612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:00.858818 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 17:38:00.858879 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:38:00.870633 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:38:00.870942 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:38:00.872905 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:38:00.888769 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 17:38:00.889192 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 17:38:00.891847 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:38:00.894513 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 17:38:00.894657 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 17:38:00.906775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:38:00.909764 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:38:01.011340 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: VF slot 1 added
Sep 12 17:38:01.021927 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:38:01.022001 kernel: hv_pci 5a0d14ca-6432-4393-a3b4-171f5e6fc152: PCI VMBus probing: Using version 0x10004
Sep 12 17:38:01.029876 kernel: hv_pci 5a0d14ca-6432-4393-a3b4-171f5e6fc152: PCI host bridge to bus 6432:00
Sep 12 17:38:01.030170 kernel: pci_bus 6432:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 12 17:38:01.033156 kernel: pci_bus 6432:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:38:01.038134 kernel: pci 6432:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 12 17:38:01.042874 kernel: pci 6432:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 12 17:38:01.045869 kernel: pci 6432:00:02.0: enabling Extended Tags
Sep 12 17:38:01.060852 kernel: pci 6432:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6432:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 12 17:38:01.067980 kernel: pci_bus 6432:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:38:01.068242 kernel: pci 6432:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 12 17:38:01.240634 kernel: mlx5_core 6432:00:02.0: enabling device (0000 -> 0002)
Sep 12 17:38:01.245769 kernel: mlx5_core 6432:00:02.0: firmware version: 14.30.5000
Sep 12 17:38:01.398926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 17:38:01.439762 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (447)
Sep 12 17:38:01.450426 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (464)
Sep 12 17:38:01.466436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:38:01.479765 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: VF registering: eth1
Sep 12 17:38:01.483772 kernel: mlx5_core 6432:00:02.0 eth1: joined to eth0
Sep 12 17:38:01.485428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 17:38:01.497153 kernel: mlx5_core 6432:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 12 17:38:01.498202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 17:38:01.508232 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 17:38:01.522820 kernel: mlx5_core 6432:00:02.0 enP25650s1: renamed from eth1
Sep 12 17:38:01.525905 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:38:01.546771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:38:01.553760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:38:02.568761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:38:02.570408 disk-uuid[599]: The operation has completed successfully.
Sep 12 17:38:02.675649 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:38:02.675779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:38:02.689931 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:38:02.695004 sh[712]: Success
Sep 12 17:38:02.723766 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:38:03.034435 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:38:03.047881 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:38:03.054179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:38:03.094770 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:38:03.094834 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:03.100713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:38:03.106094 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:38:03.108775 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:38:03.489161 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:38:03.495800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:38:03.506933 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:38:03.513457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:38:03.548304 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:03.548396 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:03.551231 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:38:03.601753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:38:03.611614 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:38:03.620918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:38:03.629975 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:03.627678 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:38:03.641139 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:38:03.651014 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:38:03.668413 systemd-networkd[890]: lo: Link UP
Sep 12 17:38:03.668423 systemd-networkd[890]: lo: Gained carrier
Sep 12 17:38:03.670618 systemd-networkd[890]: Enumeration completed
Sep 12 17:38:03.670771 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:38:03.673688 systemd[1]: Reached target network.target - Network.
Sep 12 17:38:03.675349 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:03.675354 systemd-networkd[890]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:38:03.746771 kernel: mlx5_core 6432:00:02.0 enP25650s1: Link up
Sep 12 17:38:03.750763 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:38:03.784772 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: Data path switched to VF: enP25650s1
Sep 12 17:38:03.785009 systemd-networkd[890]: enP25650s1: Link UP
Sep 12 17:38:03.785175 systemd-networkd[890]: eth0: Link UP
Sep 12 17:38:03.789498 systemd-networkd[890]: eth0: Gained carrier
Sep 12 17:38:03.789523 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:03.798942 systemd-networkd[890]: enP25650s1: Gained carrier
Sep 12 17:38:03.822814 systemd-networkd[890]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16
Sep 12 17:38:04.756548 ignition[896]: Ignition 2.19.0
Sep 12 17:38:04.756560 ignition[896]: Stage: fetch-offline
Sep 12 17:38:04.756603 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:04.756614 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:38:04.756720 ignition[896]: parsed url from cmdline: ""
Sep 12 17:38:04.756725 ignition[896]: no config URL provided
Sep 12 17:38:04.756731 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:38:04.756770 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:38:04.756790 ignition[896]: failed to fetch config: resource requires networking
Sep 12 17:38:04.757036 ignition[896]: Ignition finished successfully
Sep 12 17:38:04.782995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:38:04.792081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:38:04.807731 ignition[905]: Ignition 2.19.0
Sep 12 17:38:04.807765 ignition[905]: Stage: fetch
Sep 12 17:38:04.807984 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:04.807997 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:38:04.808081 ignition[905]: parsed url from cmdline: ""
Sep 12 17:38:04.808084 ignition[905]: no config URL provided
Sep 12 17:38:04.808088 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:38:04.808095 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:38:04.809789 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:38:04.874024 ignition[905]: GET result: OK
Sep 12 17:38:04.874147 ignition[905]: config has been read from IMDS userdata
Sep 12 17:38:04.874201 ignition[905]: parsing config with SHA512: c9279c57b1badbf6551404bf8a995e5b5a66be33fea8e43395d54c98dc003464e7b10db2bdadf8a116ee95a19308b8cb3739ed336c4096a6c75ac6c57ff0cf12
Sep 12 17:38:04.880772 unknown[905]: fetched base config from "system"
Sep 12 17:38:04.881160 ignition[905]: fetch: fetch complete
Sep 12 17:38:04.880780 unknown[905]: fetched base config from "system"
Sep 12 17:38:04.881166 ignition[905]: fetch: fetch passed
Sep 12 17:38:04.880786 unknown[905]: fetched user config from "azure"
Sep 12 17:38:04.881210 ignition[905]: Ignition finished successfully
Sep 12 17:38:04.884911 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:38:04.894903 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:38:04.912650 ignition[911]: Ignition 2.19.0
Sep 12 17:38:04.912660 ignition[911]: Stage: kargs
Sep 12 17:38:04.912901 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:04.912917 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:38:04.913782 ignition[911]: kargs: kargs passed
Sep 12 17:38:04.913826 ignition[911]: Ignition finished successfully
Sep 12 17:38:04.926497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:38:04.940897 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:38:04.956107 ignition[917]: Ignition 2.19.0
Sep 12 17:38:04.956118 ignition[917]: Stage: disks
Sep 12 17:38:04.959216 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:38:04.956328 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:04.963808 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:38:04.956341 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:38:04.966779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:38:04.957245 ignition[917]: disks: disks passed
Sep 12 17:38:04.972654 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:38:04.957287 ignition[917]: Ignition finished successfully
Sep 12 17:38:04.977418 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:38:04.980282 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:38:05.008024 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:38:05.057678 systemd-fsck[925]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 17:38:05.062470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:38:05.080888 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:38:05.177761 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:38:05.177815 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:38:05.182543 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:38:05.220867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:38:05.238762 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (936)
Sep 12 17:38:05.239872 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:38:05.251903 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:05.251933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:05.251950 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:38:05.252811 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:38:05.268879 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:38:05.260920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:38:05.260953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:38:05.267627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:38:05.282904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:38:05.289937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:38:05.487083 systemd-networkd[890]: eth0: Gained IPv6LL
Sep 12 17:38:05.911412 coreos-metadata[951]: Sep 12 17:38:05.911 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:38:05.915419 coreos-metadata[951]: Sep 12 17:38:05.914 INFO Fetch successful
Sep 12 17:38:05.915419 coreos-metadata[951]: Sep 12 17:38:05.914 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:38:05.923813 coreos-metadata[951]: Sep 12 17:38:05.923 INFO Fetch successful
Sep 12 17:38:05.936255 coreos-metadata[951]: Sep 12 17:38:05.936 INFO wrote hostname ci-4081.3.6-a-da806c5a3d to /sysroot/etc/hostname
Sep 12 17:38:05.942071 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:38:06.004233 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:38:06.038355 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:38:06.059595 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:38:06.064898 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:38:07.021463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:38:07.030872 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:38:07.044756 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:07.044947 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:38:07.050852 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:38:07.079198 ignition[1053]: INFO : Ignition 2.19.0 Sep 12 17:38:07.083937 ignition[1053]: INFO : Stage: mount Sep 12 17:38:07.083937 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:07.083937 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:07.083937 ignition[1053]: INFO : mount: mount passed Sep 12 17:38:07.083937 ignition[1053]: INFO : Ignition finished successfully Sep 12 17:38:07.084865 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:38:07.093568 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:38:07.111820 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:38:07.117828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:38:07.146768 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1066) Sep 12 17:38:07.153352 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:07.153418 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:07.155793 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:38:07.163769 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:38:07.164427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:38:07.189977 ignition[1083]: INFO : Ignition 2.19.0 Sep 12 17:38:07.189977 ignition[1083]: INFO : Stage: files Sep 12 17:38:07.194043 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:07.194043 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:07.194043 ignition[1083]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:38:07.203004 ignition[1083]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:38:07.203004 ignition[1083]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:38:07.275724 ignition[1083]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:38:07.282131 ignition[1083]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:38:07.282131 ignition[1083]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:38:07.276113 unknown[1083]: wrote ssh authorized keys file for user: core Sep 12 17:38:07.323467 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:38:07.328639 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 17:38:07.439874 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:38:07.550982 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:38:07.550982 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:07.560867 
ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:38:07.843355 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:38:08.232646 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:08.232646 ignition[1083]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:38:08.258085 ignition[1083]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: files passed Sep 12 17:38:08.271001 ignition[1083]: INFO : Ignition finished successfully Sep 12 17:38:08.260143 systemd[1]: Finished ignition-files.service - Ignition (files). 
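The files stage above writes payload files, links the kubernetes sysext into /etc/extensions, and enables prepare-helm.service. The node's real Ignition config is not shown in the log, but a hypothetical config fragment driving similar operations could be generated like this (spec version, file contents, and unit text are invented placeholders):

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [{
                "path": "/home/core/install.sh",
                "mode": 0o755,
                "contents": {"source": "data:,echo%20placeholder%0A"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm\n[Service]\nType=oneshot\n"
                            "ExecStart=/usr/bin/tar -C /opt/bin -xf /opt/helm-v3.17.0-linux-amd64.tar.gz\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }],
        },
    }
    print(json.dumps(config, indent=2))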
Sep 12 17:38:08.298409 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:38:08.327949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:38:08.331549 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:38:08.331642 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:38:08.355450 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.355450 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.365864 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.359483 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:08.363566 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:38:08.386983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:38:08.412718 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:38:08.412857 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:38:08.421872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:38:08.424661 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:38:08.431919 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:38:08.441004 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:38:08.454905 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:08.466872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:38:08.478619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:08.485066 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:08.490924 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:38:08.493262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:38:08.493388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:08.502582 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:38:08.512903 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:38:08.517279 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:38:08.523564 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:08.529967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:08.536254 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:38:08.539041 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:08.544578 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:38:08.550563 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:38:08.555595 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:38:08.560323 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:38:08.565490 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:38:08.571091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:08.576765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:08.583210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:38:08.586025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:08.590022 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:38:08.590155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:08.601951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:38:08.602144 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:08.607816 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:38:08.607945 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:38:08.618087 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:38:08.618240 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:38:08.635943 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:38:08.638497 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:38:08.638670 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:08.652994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:38:08.655321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:38:08.663596 ignition[1135]: INFO : Ignition 2.19.0 Sep 12 17:38:08.663596 ignition[1135]: INFO : Stage: umount Sep 12 17:38:08.663596 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:08.663596 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:08.663596 ignition[1135]: INFO : umount: umount passed Sep 12 17:38:08.663596 ignition[1135]: INFO : Ignition finished successfully Sep 12 17:38:08.655507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:08.664238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:38:08.664439 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:08.672812 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:38:08.672908 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:38:08.676331 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:38:08.676568 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:38:08.683286 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:38:08.683339 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:38:08.688149 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:38:08.688202 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:38:08.693568 systemd[1]: Stopped target network.target - Network. Sep 12 17:38:08.698246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:38:08.698308 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:08.701397 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:38:08.708909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 12 17:38:08.714054 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:08.723707 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:38:08.728363 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:38:08.730904 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:38:08.730970 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:08.736004 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:38:08.736060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:08.741039 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:38:08.741103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:38:08.746215 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:38:08.746269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:08.749457 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:38:08.752489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:38:08.770800 systemd-networkd[890]: eth0: DHCPv6 lease lost Sep 12 17:38:08.771803 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:38:08.772527 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:38:08.772616 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:38:08.782484 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:38:08.784789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:38:08.790638 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:38:08.791496 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:38:08.797466 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:38:08.797536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:08.821580 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:38:08.829661 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:38:08.829793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:08.835329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:38:08.835390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:08.867219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:38:08.867306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:08.870317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:38:08.870377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:08.874062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:08.904350 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:38:08.904524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:08.913983 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:38:08.914043 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:08.921913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 12 17:38:08.921967 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:08.930341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:38:08.933105 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:38:08.943305 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: Data path switched from VF: enP25650s1 Sep 12 17:38:08.943538 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:38:08.943606 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:08.951528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:08.951597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:08.967962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:38:08.971065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:38:08.971140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:08.975478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:08.975537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:08.981243 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:38:08.981347 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:38:08.986784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:38:08.986875 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:38:09.291891 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:38:09.292028 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:38:09.297631 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:38:09.300344 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:38:09.300417 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:09.312422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:38:09.711348 systemd[1]: Switching root. 
Sep 12 17:38:09.793168 systemd-journald[176]: Journal stopped Sep 12 17:37:59.141919 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:37:59.141956 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:37:59.141969 kernel: BIOS-provided physical RAM map: Sep 12 17:37:59.141980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:37:59.141989 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 12 17:37:59.141999 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Sep 12 17:37:59.142011 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Sep 12 17:37:59.142026 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Sep 12 17:37:59.142052 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 12 17:37:59.142064 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 12 17:37:59.142075 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 12 17:37:59.142086 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 12 17:37:59.142097 kernel: printk: bootconsole [earlyser0] enabled Sep 12 17:37:59.142108 kernel: NX (Execute Disable) protection: active Sep 12 17:37:59.142125 kernel: APIC: Static calls initialized Sep 12 17:37:59.142138 kernel: efi: EFI v2.7 by Microsoft Sep 12 17:37:59.142151 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Sep 12 17:37:59.142163 kernel: SMBIOS 3.1.0 present. 
Sep 12 17:37:59.142176 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Sep 12 17:37:59.142188 kernel: Hypervisor detected: Microsoft Hyper-V Sep 12 17:37:59.142201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Sep 12 17:37:59.142214 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Sep 12 17:37:59.142224 kernel: Hyper-V: Nested features: 0x1e0101 Sep 12 17:37:59.142237 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 12 17:37:59.142252 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 12 17:37:59.142264 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 12 17:37:59.142276 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 12 17:37:59.142289 kernel: tsc: Marking TSC unstable due to running on Hyper-V Sep 12 17:37:59.142300 kernel: tsc: Detected 2593.907 MHz processor Sep 12 17:37:59.142312 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:37:59.142323 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:37:59.142335 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Sep 12 17:37:59.142347 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 17:37:59.142362 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:37:59.142389 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Sep 12 17:37:59.142401 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Sep 12 17:37:59.142412 kernel: Using GB pages for direct mapping Sep 12 17:37:59.142422 kernel: Secure boot disabled Sep 12 17:37:59.142434 kernel: ACPI: Early table checksum verification disabled Sep 12 17:37:59.142447 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 12 17:37:59.142465 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142484 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142497 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Sep 12 17:37:59.142509 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 12 17:37:59.142522 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142536 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142551 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142568 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142582 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142597 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142611 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:37:59.142625 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 12 17:37:59.142639 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Sep 12 17:37:59.142653 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 12 17:37:59.142667 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 12 17:37:59.142686 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 12 17:37:59.142701 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 12 17:37:59.142715 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 12 17:37:59.142728 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 12 17:37:59.142741 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 12 17:37:59.142755 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 12 17:37:59.142769 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 17:37:59.142783 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 12 17:37:59.142796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 12 17:37:59.142813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 12 17:37:59.142826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 12 17:37:59.142840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 12 17:37:59.142853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 12 17:37:59.142866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 12 17:37:59.142880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 12 17:37:59.142893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 12 17:37:59.142906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 12 17:37:59.142920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 12 17:37:59.142936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 12 17:37:59.142949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 12 17:37:59.142963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 12 17:37:59.142976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 12 17:37:59.142990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 12 17:37:59.143003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 12 17:37:59.143016 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 12 17:37:59.144254 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 12 17:37:59.144278 kernel: Zone ranges: Sep 12 17:37:59.151406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:37:59.151428 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 12 17:37:59.151443 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 12 17:37:59.151458 kernel: Movable zone start for each node Sep 12 17:37:59.151473 kernel: Early memory node ranges Sep 12 17:37:59.151487 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 17:37:59.151503 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 12 17:37:59.151517 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 12 17:37:59.151532 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 12 17:37:59.151552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 12 17:37:59.151566 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:37:59.151580 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 17:37:59.151595 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Sep 12 17:37:59.151609 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 12 17:37:59.151623 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 12 17:37:59.151637 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:37:59.151652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:37:59.151667 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:37:59.151685 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 12 17:37:59.151699 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 17:37:59.151713 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 12 17:37:59.151728 kernel: Booting paravirtualized kernel on Hyper-V Sep 12 17:37:59.151742 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:37:59.151757 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:37:59.151771 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 12 17:37:59.151785 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 17:37:59.151799 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:37:59.151816 kernel: Hyper-V: PV spinlocks enabled Sep 12 17:37:59.151830 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:37:59.151846 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:37:59.151861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:37:59.151875 kernel: random: crng init done Sep 12 17:37:59.151889 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 12 17:37:59.151904 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:37:59.151918 kernel: Fallback order for Node 0: 0 Sep 12 17:37:59.151936 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 12 17:37:59.151961 kernel: Policy zone: Normal Sep 12 17:37:59.151979 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:37:59.151994 kernel: software IO TLB: area num 2. Sep 12 17:37:59.152010 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 310124K reserved, 0K cma-reserved) Sep 12 17:37:59.152026 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:37:59.152057 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:37:59.152073 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:37:59.152088 kernel: Dynamic Preempt: voluntary Sep 12 17:37:59.152103 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:37:59.152117 kernel: rcu: RCU event tracing is enabled. Sep 12 17:37:59.152134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:37:59.152147 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:37:59.152161 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:37:59.152175 kernel: Tracing variant of Tasks RCU enabled. 
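The "Memory: 8077076K/8387460K available" figure above can be cross-checked against the BIOS-e820 "usable" ranges reported earlier in this boot: summing them, minus the first 4 KiB page that the kernel re-marks reserved, reproduces the 8387460K total (a rough sanity check; how the remaining ~310 MB of that total ends up reserved is not broken down here):

    usable = [
        (0x0000000000001000, 0x000000000009ffff),  # low memory, first page excluded
        (0x0000000000100000, 0x000000003ff40fff),
        (0x000000003ffff000, 0x000000003fffffff),
        (0x0000000100000000, 0x00000002bfffffff),
    ]
    total_kib = sum(end - start + 1 for start, end in usable) // 1024
    print(total_kib)  # 8387460, matching the kernel's reported total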
Sep 12 17:37:59.152188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:37:59.152204 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:37:59.152218 kernel: Using NULL legacy PIC Sep 12 17:37:59.152231 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 12 17:37:59.152245 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:37:59.152259 kernel: Console: colour dummy device 80x25 Sep 12 17:37:59.152272 kernel: printk: console [tty1] enabled Sep 12 17:37:59.152285 kernel: printk: console [ttyS0] enabled Sep 12 17:37:59.152298 kernel: printk: bootconsole [earlyser0] disabled Sep 12 17:37:59.152312 kernel: ACPI: Core revision 20230628 Sep 12 17:37:59.152325 kernel: Failed to register legacy timer interrupt Sep 12 17:37:59.152342 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:37:59.152356 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:37:59.152369 kernel: Hyper-V: Using IPI hypercalls Sep 12 17:37:59.152383 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 12 17:37:59.152397 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 12 17:37:59.152410 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 12 17:37:59.152424 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 12 17:37:59.152438 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 12 17:37:59.152452 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 12 17:37:59.152469 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Sep 12 17:37:59.152483 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 12 17:37:59.152496 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 12 17:37:59.152510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:37:59.152524 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:37:59.152538 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:37:59.152552 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 12 17:37:59.152566 kernel: RETBleed: Vulnerable Sep 12 17:37:59.152579 kernel: Speculative Store Bypass: Vulnerable Sep 12 17:37:59.152596 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:37:59.152610 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:37:59.152624 kernel: active return thunk: its_return_thunk Sep 12 17:37:59.152638 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:37:59.152651 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:37:59.152665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:37:59.152679 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:37:59.152693 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 12 17:37:59.152707 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 12 17:37:59.152722 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 12 17:37:59.152736 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:37:59.152754 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 12 17:37:59.152767 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 12 17:37:59.152781 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 12 17:37:59.152795 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 12 17:37:59.152810 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:37:59.152825 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:37:59.152839 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:37:59.152853 kernel: landlock: Up and running. Sep 12 17:37:59.152867 kernel: SELinux: Initializing. Sep 12 17:37:59.152882 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:37:59.152896 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:37:59.152911 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 12 17:37:59.152928 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:37:59.152943 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:37:59.152958 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:37:59.152973 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 12 17:37:59.152987 kernel: signal: max sigframe size: 3632 Sep 12 17:37:59.153002 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:37:59.153016 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:37:59.153073 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:37:59.153088 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:37:59.153104 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:37:59.153122 kernel: .... node #0, CPUs: #1 Sep 12 17:37:59.153140 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 12 17:37:59.153158 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
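The xstate offsets logged above are cumulative: each XSAVE feature starts where the previous one ends, after the first 576 bytes covering the legacy x87/SSE area and XSAVE header, and the final offset plus size yields the quoted 2432-byte compacted context. A quick check:

    sizes = {2: 256, 5: 64, 6: 512, 7: 1024}  # AVX, AVX-512 opmask, Hi256, ZMM_Hi256
    offset = 576                              # legacy area (512) + XSAVE header (64)
    for feature, size in sizes.items():
        print(f"xstate_offset[{feature}]: {offset}, xstate_sizes[{feature}]: {size}")
        offset += size
    print("context size:", offset)  # 2432 bytes, as reported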
Sep 12 17:37:59.153173 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:37:59.153189 kernel: smpboot: Max logical packages: 1 Sep 12 17:37:59.153205 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 12 17:37:59.153221 kernel: devtmpfs: initialized Sep 12 17:37:59.153240 kernel: x86/mm: Memory block size: 128MB Sep 12 17:37:59.153254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 12 17:37:59.153270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:37:59.153283 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:37:59.153298 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:37:59.153312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:37:59.153326 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:37:59.153340 kernel: audit: type=2000 audit(1757698677.029:1): state=initialized audit_enabled=0 res=1 Sep 12 17:37:59.153354 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:37:59.153371 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:37:59.153385 kernel: cpuidle: using governor menu Sep 12 17:37:59.153399 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:37:59.153412 kernel: dca service started, version 1.12.1 Sep 12 17:37:59.153427 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 12 17:37:59.153443 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 12 17:37:59.153457 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:37:59.153472 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:37:59.153487 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:37:59.153504 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:37:59.153519 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:37:59.153533 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:37:59.153547 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:37:59.153561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:37:59.153575 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:37:59.153590 kernel: ACPI: Interpreter enabled Sep 12 17:37:59.153604 kernel: ACPI: PM: (supports S0 S5) Sep 12 17:37:59.153619 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:37:59.153637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:37:59.153651 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 12 17:37:59.153665 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 12 17:37:59.153677 kernel: iommu: Default domain type: Translated Sep 12 17:37:59.153690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:37:59.153704 kernel: efivars: Registered efivars operations Sep 12 17:37:59.153718 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:37:59.153733 kernel: PCI: System does not support PCI Sep 12 17:37:59.153747 kernel: vgaarb: loaded Sep 12 17:37:59.153767 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 12 17:37:59.153782 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:37:59.153795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:37:59.153808 kernel: pnp: PnP ACPI init Sep 12 17:37:59.153822 kernel: pnp: PnP ACPI: found 3 
devices Sep 12 17:37:59.153837 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:37:59.153851 kernel: NET: Registered PF_INET protocol family Sep 12 17:37:59.153863 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:37:59.153877 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 12 17:37:59.153895 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:37:59.153910 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:37:59.153925 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 12 17:37:59.153939 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 12 17:37:59.153953 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 17:37:59.153967 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 17:37:59.153981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:37:59.153996 kernel: NET: Registered PF_XDP protocol family Sep 12 17:37:59.154010 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:37:59.154027 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 17:37:59.154058 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Sep 12 17:37:59.154073 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:37:59.154086 kernel: Initialise system trusted keyrings Sep 12 17:37:59.154101 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 12 17:37:59.154115 kernel: Key type asymmetric registered Sep 12 17:37:59.154130 kernel: Asymmetric key parser 'x509' registered Sep 12 17:37:59.154145 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:37:59.154159 kernel: io scheduler mq-deadline registered Sep 12 17:37:59.154178 kernel: io scheduler kyber registered Sep 12 17:37:59.154193 kernel: io scheduler bfq registered Sep 12 17:37:59.154208 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:37:59.154223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:37:59.154238 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:37:59.154254 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 17:37:59.154269 kernel: i8042: PNP: No PS/2 controller found. 
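The "(order: N, X bytes)" annotations on the network hash tables above follow directly from the page allocation order, X = 2^N pages of 4096 bytes each, which lines up with the figures in the log:

    def hash_table_bytes(order: int, page_size: int = 4096) -> int:
        return (1 << order) * page_size

    assert hash_table_bytes(7) == 524288    # TCP established hash, 65536 entries
    assert hash_table_bytes(9) == 2097152   # TCP bind hash, 65536 entries
    assert hash_table_bytes(5) == 131072    # UDP hash, 4096 entries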
Sep 12 17:37:59.154463 kernel: rtc_cmos 00:02: registered as rtc0 Sep 12 17:37:59.154595 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T17:37:58 UTC (1757698678) Sep 12 17:37:59.154715 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 12 17:37:59.154734 kernel: intel_pstate: CPU model not supported Sep 12 17:37:59.154749 kernel: efifb: probing for efifb Sep 12 17:37:59.154765 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:37:59.154780 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:37:59.154795 kernel: efifb: scrolling: redraw Sep 12 17:37:59.154810 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:37:59.154829 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:37:59.154844 kernel: fb0: EFI VGA frame buffer device Sep 12 17:37:59.154860 kernel: pstore: Using crash dump compression: deflate Sep 12 17:37:59.154875 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:37:59.154890 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:37:59.154906 kernel: Segment Routing with IPv6 Sep 12 17:37:59.154921 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:37:59.154936 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:37:59.154952 kernel: Key type dns_resolver registered Sep 12 17:37:59.154967 kernel: IPI shorthand broadcast: enabled Sep 12 17:37:59.154985 kernel: sched_clock: Marking stable (928110200, 52904000)->(1213359300, -232345100) Sep 12 17:37:59.155000 kernel: registered taskstats version 1 Sep 12 17:37:59.155016 kernel: Loading compiled-in X.509 certificates Sep 12 17:37:59.155087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:37:59.155101 kernel: Key type .fscrypt registered Sep 12 17:37:59.155114 kernel: Key type fscrypt-provisioning registered Sep 12 17:37:59.155128 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:37:59.155146 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:37:59.155165 kernel: ima: No architecture policies found Sep 12 17:37:59.155178 kernel: clk: Disabling unused clocks Sep 12 17:37:59.155191 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:37:59.155205 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:37:59.155219 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:37:59.155234 kernel: Run /init as init process Sep 12 17:37:59.155249 kernel: with arguments: Sep 12 17:37:59.155263 kernel: /init Sep 12 17:37:59.155277 kernel: with environment: Sep 12 17:37:59.155292 kernel: HOME=/ Sep 12 17:37:59.155311 kernel: TERM=linux Sep 12 17:37:59.155326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:37:59.155344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:37:59.155362 systemd[1]: Detected virtualization microsoft. Sep 12 17:37:59.155377 systemd[1]: Detected architecture x86-64. Sep 12 17:37:59.155392 systemd[1]: Running in initrd. Sep 12 17:37:59.155419 systemd[1]: No hostname configured, using default hostname. Sep 12 17:37:59.155438 systemd[1]: Hostname set to . 
Sep 12 17:37:59.155453 systemd[1]: Initializing machine ID from random generator. Sep 12 17:37:59.155467 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:37:59.155481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:37:59.155497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:37:59.155518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:37:59.155534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:37:59.155550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:37:59.155569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:37:59.155586 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:37:59.155601 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:37:59.155617 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:37:59.155633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:37:59.155648 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:37:59.155662 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:37:59.155681 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:37:59.155696 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:37:59.155713 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:37:59.155728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:37:59.155742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:37:59.155757 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:37:59.155773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:37:59.155788 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:37:59.155803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:37:59.155821 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:37:59.155838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:37:59.155854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:37:59.155870 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:37:59.155886 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:37:59.155902 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:37:59.155918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:37:59.155963 systemd-journald[176]: Collecting audit messages is disabled. Sep 12 17:37:59.156003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:37:59.156017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:37:59.156049 systemd-journald[176]: Journal started Sep 12 17:37:59.156084 systemd-journald[176]: Runtime Journal (/run/log/journal/bfabc79ea2104116bac958d4c926f911) is 8.0M, max 158.8M, 150.8M free. 
Sep 12 17:37:59.140682 systemd-modules-load[177]: Inserted module 'overlay' Sep 12 17:37:59.167386 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:37:59.171288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:37:59.180327 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:37:59.185390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:37:59.199477 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:37:59.199508 kernel: Bridge firewalling registered Sep 12 17:37:59.199241 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 12 17:37:59.202208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:37:59.214210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:37:59.228234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:37:59.241456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:37:59.249165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:37:59.257580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:37:59.266067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:37:59.268574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:37:59.269649 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:37:59.289280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:37:59.301236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:37:59.307757 dracut-cmdline[206]: dracut-dracut-053 Sep 12 17:37:59.307757 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:37:59.313099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:37:59.357969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:37:59.377267 systemd-resolved[211]: Positive Trust Anchors: Sep 12 17:37:59.377285 systemd-resolved[211]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:37:59.377344 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:37:59.380968 systemd-resolved[211]: Defaulting to hostname 'linux'. Sep 12 17:37:59.382267 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:37:59.387162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:37:59.423048 kernel: SCSI subsystem initialized Sep 12 17:37:59.433052 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:37:59.444054 kernel: iscsi: registered transport (tcp) Sep 12 17:37:59.466037 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:37:59.466117 kernel: QLogic iSCSI HBA Driver Sep 12 17:37:59.503293 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:37:59.515229 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:37:59.545616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:37:59.545727 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:37:59.548913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:37:59.590054 kernel: raid6: avx512x4 gen() 18410 MB/s Sep 12 17:37:59.610046 kernel: raid6: avx512x2 gen() 18221 MB/s Sep 12 17:37:59.629041 kernel: raid6: avx512x1 gen() 18345 MB/s Sep 12 17:37:59.648042 kernel: raid6: avx2x4 gen() 18328 MB/s Sep 12 17:37:59.667045 kernel: raid6: avx2x2 gen() 18342 MB/s Sep 12 17:37:59.694571 kernel: raid6: avx2x1 gen() 14094 MB/s Sep 12 17:37:59.694653 kernel: raid6: using algorithm avx512x4 gen() 18410 MB/s Sep 12 17:37:59.716195 kernel: raid6: .... xor() 7972 MB/s, rmw enabled Sep 12 17:37:59.716226 kernel: raid6: using avx512x2 recovery algorithm Sep 12 17:37:59.739057 kernel: xor: automatically using best checksumming function avx Sep 12 17:37:59.886061 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:37:59.895495 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:37:59.903304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:37:59.914948 systemd-udevd[394]: Using default interface naming scheme 'v255'. Sep 12 17:37:59.918879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:37:59.940208 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:37:59.953421 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 12 17:37:59.981222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:37:59.990163 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:00.033546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:00.045247 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
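dracut-cmdline[206] above echoes the full kernel command line that later components key off (flatcar.first_boot detection, the azure OEM id, verity.usrhash, root=LABEL=ROOT, and so on). A simplified parser for such a command line, as a sketch only; unlike the kernel, repeated options such as console= keep just their last value here:

    def parse_cmdline(cmdline: str) -> dict:
        opts = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            opts[key] = value if sep else True
        return opts

    with open("/proc/cmdline") as f:
        opts = parse_cmdline(f.read())
    print(opts.get("flatcar.oem.id"), opts.get("root"))  # e.g. azure LABEL=ROOT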
Sep 12 17:38:00.068848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:00.080009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:00.089738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:00.098355 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:00.110194 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:38:00.140519 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:38:00.153634 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:38:00.163975 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:38:00.164058 kernel: AES CTR mode by8 optimization enabled Sep 12 17:38:00.168298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:00.170771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:00.180045 kernel: hv_vmbus: Vmbus version:5.2 Sep 12 17:38:00.182684 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:38:00.208182 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 17:38:00.188222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:00.188521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:00.192184 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:00.212930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:00.232618 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 17:38:00.252939 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:38:00.253012 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:38:00.262063 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 17:38:00.268449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:00.279049 kernel: scsi host1: storvsc_host_t Sep 12 17:38:00.279117 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:38:00.283365 kernel: scsi host0: storvsc_host_t Sep 12 17:38:00.290617 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 17:38:00.290680 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 17:38:00.290917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:38:00.301907 kernel: PTP clock support registered Sep 12 17:38:00.301936 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 12 17:38:00.310204 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 17:38:00.310248 kernel: hv_vmbus: registering driver hv_utils Sep 12 17:38:00.311163 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 17:38:00.315561 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 17:38:00.315603 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 17:38:00.827520 systemd-resolved[211]: Clock change detected. Flushing caches. Sep 12 17:38:00.835853 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 17:38:00.844612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:38:00.858818 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 17:38:00.858879 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 17:38:00.870633 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 17:38:00.870942 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:38:00.872905 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 17:38:00.888769 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 17:38:00.889192 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:38:00.891847 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:38:00.894513 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 17:38:00.894657 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 17:38:00.906775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:38:00.909764 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:38:01.011340 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: VF slot 1 added Sep 12 17:38:01.021927 kernel: hv_vmbus: registering driver hv_pci Sep 12 17:38:01.022001 kernel: hv_pci 5a0d14ca-6432-4393-a3b4-171f5e6fc152: PCI VMBus probing: Using version 0x10004 Sep 12 17:38:01.029876 kernel: hv_pci 5a0d14ca-6432-4393-a3b4-171f5e6fc152: PCI host bridge to bus 6432:00 Sep 12 17:38:01.030170 kernel: pci_bus 6432:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 12 17:38:01.033156 kernel: pci_bus 6432:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 17:38:01.038134 kernel: pci 6432:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 12 17:38:01.042874 kernel: pci 6432:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 12 17:38:01.045869 kernel: pci 6432:00:02.0: enabling Extended Tags Sep 12 17:38:01.060852 kernel: pci 6432:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6432:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 12 17:38:01.067980 kernel: pci_bus 6432:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 17:38:01.068242 kernel: pci 6432:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 12 17:38:01.240634 kernel: mlx5_core 6432:00:02.0: enabling device (0000 -> 0002) Sep 12 17:38:01.245769 kernel: mlx5_core 6432:00:02.0: firmware version: 14.30.5000 Sep 12 17:38:01.398926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 17:38:01.439762 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (447) Sep 12 17:38:01.450426 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (464) Sep 12 17:38:01.466436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:38:01.479765 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: VF registering: eth1 Sep 12 17:38:01.483772 kernel: mlx5_core 6432:00:02.0 eth1: joined to eth0 Sep 12 17:38:01.485428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 17:38:01.497153 kernel: mlx5_core 6432:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 17:38:01.498202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
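The storvsc probe above reports sda as 63737856 logical blocks of 512 bytes, printed as 32.6 GB / 30.4 GiB. A quick check of that arithmetic:

    blocks = 63737856            # 512-byte logical blocks reported for sda above
    size_bytes = blocks * 512

    print(size_bytes)                           # 32633782272 bytes
    print(round(size_bytes / 10**9, 1), "GB")   # 32.6 GB (decimal units)
    print(round(size_bytes / 2**30, 1), "GiB")  # 30.4 GiB (binary units)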
Sep 12 17:38:01.508232 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 17:38:01.522820 kernel: mlx5_core 6432:00:02.0 enP25650s1: renamed from eth1 Sep 12 17:38:01.525905 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:38:01.546771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:38:01.553760 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:38:02.568761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:38:02.570408 disk-uuid[599]: The operation has completed successfully. Sep 12 17:38:02.675649 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:38:02.675779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:38:02.689931 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:38:02.695004 sh[712]: Success Sep 12 17:38:02.723766 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:38:03.034435 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:38:03.047881 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:38:03.054179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:38:03.094770 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:38:03.094834 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:03.100713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:38:03.106094 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:38:03.108775 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:38:03.489161 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:38:03.495800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:38:03.506933 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:38:03.513457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:38:03.548304 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:03.548396 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:03.551231 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:38:03.601753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:03.611614 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:38:03.620918 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:03.629975 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:03.627678 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:38:03.641139 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:38:03.651014 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:38:03.668413 systemd-networkd[890]: lo: Link UP Sep 12 17:38:03.668423 systemd-networkd[890]: lo: Gained carrier Sep 12 17:38:03.670618 systemd-networkd[890]: Enumeration completed Sep 12 17:38:03.670771 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 17:38:03.673688 systemd[1]: Reached target network.target - Network. Sep 12 17:38:03.675349 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:03.675354 systemd-networkd[890]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:38:03.746771 kernel: mlx5_core 6432:00:02.0 enP25650s1: Link up Sep 12 17:38:03.750763 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:38:03.784772 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: Data path switched to VF: enP25650s1 Sep 12 17:38:03.785009 systemd-networkd[890]: enP25650s1: Link UP Sep 12 17:38:03.785175 systemd-networkd[890]: eth0: Link UP Sep 12 17:38:03.789498 systemd-networkd[890]: eth0: Gained carrier Sep 12 17:38:03.789523 systemd-networkd[890]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:03.798942 systemd-networkd[890]: enP25650s1: Gained carrier Sep 12 17:38:03.822814 systemd-networkd[890]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 12 17:38:04.756548 ignition[896]: Ignition 2.19.0 Sep 12 17:38:04.756560 ignition[896]: Stage: fetch-offline Sep 12 17:38:04.756603 ignition[896]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:04.756614 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:04.756720 ignition[896]: parsed url from cmdline: "" Sep 12 17:38:04.756725 ignition[896]: no config URL provided Sep 12 17:38:04.756731 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:38:04.756770 ignition[896]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:38:04.756790 ignition[896]: failed to fetch config: resource requires networking Sep 12 17:38:04.757036 ignition[896]: Ignition finished successfully Sep 12 17:38:04.782995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:04.792081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
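systemd-networkd above acquired 10.200.4.37/24 with gateway 10.200.4.1 from 168.63.129.16, the well-known Azure platform DHCP/wireserver address. A small sketch confirming that the gateway sits inside the leased subnet, with the values copied from the log:

    import ipaddress

    # Values taken from the DHCPv4 lease logged above.
    iface = ipaddress.ip_interface("10.200.4.37/24")
    gateway = ipaddress.ip_address("10.200.4.1")

    print(iface.network)             # 10.200.4.0/24
    print(gateway in iface.network)  # True: the gateway is on-link for eth0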
Sep 12 17:38:04.807731 ignition[905]: Ignition 2.19.0 Sep 12 17:38:04.807765 ignition[905]: Stage: fetch Sep 12 17:38:04.807984 ignition[905]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:04.807997 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:04.808081 ignition[905]: parsed url from cmdline: "" Sep 12 17:38:04.808084 ignition[905]: no config URL provided Sep 12 17:38:04.808088 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:38:04.808095 ignition[905]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:38:04.809789 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 17:38:04.874024 ignition[905]: GET result: OK Sep 12 17:38:04.874147 ignition[905]: config has been read from IMDS userdata Sep 12 17:38:04.874201 ignition[905]: parsing config with SHA512: c9279c57b1badbf6551404bf8a995e5b5a66be33fea8e43395d54c98dc003464e7b10db2bdadf8a116ee95a19308b8cb3739ed336c4096a6c75ac6c57ff0cf12 Sep 12 17:38:04.880772 unknown[905]: fetched base config from "system" Sep 12 17:38:04.881160 ignition[905]: fetch: fetch complete Sep 12 17:38:04.880780 unknown[905]: fetched base config from "system" Sep 12 17:38:04.881166 ignition[905]: fetch: fetch passed Sep 12 17:38:04.880786 unknown[905]: fetched user config from "azure" Sep 12 17:38:04.881210 ignition[905]: Ignition finished successfully Sep 12 17:38:04.884911 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:38:04.894903 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:38:04.912650 ignition[911]: Ignition 2.19.0 Sep 12 17:38:04.912660 ignition[911]: Stage: kargs Sep 12 17:38:04.912901 ignition[911]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:04.912917 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:04.913782 ignition[911]: kargs: kargs passed Sep 12 17:38:04.913826 ignition[911]: Ignition finished successfully Sep 12 17:38:04.926497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:38:04.940897 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:38:04.956107 ignition[917]: Ignition 2.19.0 Sep 12 17:38:04.956118 ignition[917]: Stage: disks Sep 12 17:38:04.959216 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:38:04.956328 ignition[917]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:04.963808 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:04.956341 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:04.966779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:38:04.957245 ignition[917]: disks: disks passed Sep 12 17:38:04.972654 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:04.957287 ignition[917]: Ignition finished successfully Sep 12 17:38:04.977418 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:38:04.980282 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:38:05.008024 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:38:05.057678 systemd-fsck[925]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 17:38:05.062470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
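The fetch stage above read its config from the Azure IMDS userData endpoint and logged the SHA512 of the parsed config. A hedged sketch of the same request from inside the VM: the URL is copied verbatim from the log, while the Metadata: true header and the base64 encoding of userData are standard IMDS behaviour assumed here rather than stated in the log itself:

    import base64
    import hashlib
    import urllib.request

    # Endpoint taken verbatim from the Ignition fetch log above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data_b64 = resp.read()

    config = base64.b64decode(user_data_b64)   # assumed base64-encoded userData
    print(hashlib.sha512(config).hexdigest())  # compare against the SHA512 in the log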
Sep 12 17:38:05.080888 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:38:05.177761 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:38:05.177815 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:38:05.182543 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:38:05.220867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:38:05.238762 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (936) Sep 12 17:38:05.239872 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:38:05.251903 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:05.251933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:05.251950 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:38:05.252811 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:38:05.268879 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:38:05.260920 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:38:05.260953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:05.267627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:38:05.282904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:38:05.289937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:38:05.487083 systemd-networkd[890]: eth0: Gained IPv6LL Sep 12 17:38:05.911412 coreos-metadata[951]: Sep 12 17:38:05.911 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:38:05.915419 coreos-metadata[951]: Sep 12 17:38:05.914 INFO Fetch successful Sep 12 17:38:05.915419 coreos-metadata[951]: Sep 12 17:38:05.914 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:38:05.923813 coreos-metadata[951]: Sep 12 17:38:05.923 INFO Fetch successful Sep 12 17:38:05.936255 coreos-metadata[951]: Sep 12 17:38:05.936 INFO wrote hostname ci-4081.3.6-a-da806c5a3d to /sysroot/etc/hostname Sep 12 17:38:05.942071 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:38:06.004233 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:38:06.038355 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:38:06.059595 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:38:06.064898 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:38:07.021463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:07.030872 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:38:07.044756 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:07.044947 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:38:07.050852 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 17:38:07.079198 ignition[1053]: INFO : Ignition 2.19.0 Sep 12 17:38:07.083937 ignition[1053]: INFO : Stage: mount Sep 12 17:38:07.083937 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:07.083937 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:07.083937 ignition[1053]: INFO : mount: mount passed Sep 12 17:38:07.083937 ignition[1053]: INFO : Ignition finished successfully Sep 12 17:38:07.084865 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:38:07.093568 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:38:07.111820 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:38:07.117828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:38:07.146768 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1066) Sep 12 17:38:07.153352 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:07.153418 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:07.155793 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:38:07.163769 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:38:07.164427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:38:07.189977 ignition[1083]: INFO : Ignition 2.19.0 Sep 12 17:38:07.189977 ignition[1083]: INFO : Stage: files Sep 12 17:38:07.194043 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:07.194043 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:07.194043 ignition[1083]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:38:07.203004 ignition[1083]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:38:07.203004 ignition[1083]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:38:07.275724 ignition[1083]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:38:07.282131 ignition[1083]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:38:07.282131 ignition[1083]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:38:07.276113 unknown[1083]: wrote ssh authorized keys file for user: core Sep 12 17:38:07.323467 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:38:07.328639 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 17:38:07.439874 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:38:07.550982 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:38:07.550982 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:07.560867 
ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:07.560867 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:38:07.843355 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:38:08.232646 ignition[1083]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:08.232646 ignition[1083]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:38:08.258085 ignition[1083]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:08.271001 ignition[1083]: INFO : files: files passed Sep 12 17:38:08.271001 ignition[1083]: INFO : Ignition finished successfully Sep 12 17:38:08.260143 systemd[1]: Finished ignition-files.service - Ignition (files). 
Sep 12 17:38:08.298409 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:38:08.327949 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:38:08.331549 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:38:08.331642 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:38:08.355450 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.355450 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.365864 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:08.359483 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:08.363566 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:38:08.386983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:38:08.412718 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:38:08.412857 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:38:08.421872 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:38:08.424661 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:38:08.431919 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:38:08.441004 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:38:08.454905 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:08.466872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:38:08.478619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:08.485066 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:08.490924 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:38:08.493262 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:38:08.493388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:08.502582 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:38:08.512903 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:38:08.517279 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:38:08.523564 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:08.529967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:08.536254 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:38:08.539041 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:08.544578 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:38:08.550563 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:38:08.555595 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:38:08.560323 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:38:08.565490 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:38:08.571091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:08.576765 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:08.583210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:38:08.586025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:08.590022 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:38:08.590155 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:08.601951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:38:08.602144 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:08.607816 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:38:08.607945 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:38:08.618087 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:38:08.618240 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:38:08.635943 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:38:08.638497 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:38:08.638670 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:08.652994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:38:08.655321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:38:08.663596 ignition[1135]: INFO : Ignition 2.19.0 Sep 12 17:38:08.663596 ignition[1135]: INFO : Stage: umount Sep 12 17:38:08.663596 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:08.663596 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:38:08.663596 ignition[1135]: INFO : umount: umount passed Sep 12 17:38:08.663596 ignition[1135]: INFO : Ignition finished successfully Sep 12 17:38:08.655507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:08.664238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:38:08.664439 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:08.672812 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:38:08.672908 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:38:08.676331 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:38:08.676568 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:38:08.683286 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:38:08.683339 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:38:08.688149 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:38:08.688202 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:38:08.693568 systemd[1]: Stopped target network.target - Network. Sep 12 17:38:08.698246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:38:08.698308 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:08.701397 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:38:08.708909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 12 17:38:08.714054 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:08.723707 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:38:08.728363 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:38:08.730904 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:38:08.730970 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:08.736004 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:38:08.736060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:08.741039 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:38:08.741103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:38:08.746215 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:38:08.746269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:08.749457 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:38:08.752489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:38:08.770800 systemd-networkd[890]: eth0: DHCPv6 lease lost Sep 12 17:38:08.771803 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:38:08.772527 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:38:08.772616 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:38:08.782484 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:38:08.784789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:38:08.790638 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:38:08.791496 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:38:08.797466 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:38:08.797536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:08.821580 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:38:08.829661 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:38:08.829793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:08.835329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:38:08.835390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:08.867219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:38:08.867306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:08.870317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:38:08.870377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:08.874062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:08.904350 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:38:08.904524 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:08.913983 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:38:08.914043 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:08.921913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 12 17:38:08.921967 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:08.930341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:38:08.933105 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:38:08.943305 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: Data path switched from VF: enP25650s1 Sep 12 17:38:08.943538 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:38:08.943606 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:08.951528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:08.951597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:08.967962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:38:08.971065 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:38:08.971140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:08.975478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:08.975537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:08.981243 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:38:08.981347 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:38:08.986784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:38:08.986875 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:38:09.291891 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:38:09.292028 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:38:09.297631 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:38:09.300344 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:38:09.300417 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:09.312422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:38:09.711348 systemd[1]: Switching root. Sep 12 17:38:09.793168 systemd-journald[176]: Journal stopped Sep 12 17:38:16.797425 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Sep 12 17:38:16.797466 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:38:16.797484 kernel: SELinux: policy capability open_perms=1 Sep 12 17:38:16.797498 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:38:16.797512 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:38:16.797525 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:38:16.797541 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:38:16.797559 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:38:16.797573 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:38:16.797588 kernel: audit: type=1403 audit(1757698691.263:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:38:16.797604 systemd[1]: Successfully loaded SELinux policy in 209.917ms. Sep 12 17:38:16.797621 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.014ms. 
Sep 12 17:38:16.797640 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:38:16.797656 systemd[1]: Detected virtualization microsoft. Sep 12 17:38:16.797676 systemd[1]: Detected architecture x86-64. Sep 12 17:38:16.797692 systemd[1]: Detected first boot. Sep 12 17:38:16.797710 systemd[1]: Hostname set to . Sep 12 17:38:16.797726 systemd[1]: Initializing machine ID from random generator. Sep 12 17:38:16.797757 zram_generator::config[1177]: No configuration found. Sep 12 17:38:16.797778 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:38:16.797795 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:38:16.797811 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:38:16.797827 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:38:16.797845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:38:16.797861 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:38:16.797879 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:38:16.797898 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:38:16.797915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:38:16.797933 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:38:16.797950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:38:16.797967 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:38:16.797984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:16.798001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:16.798019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:38:16.798038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:38:16.798056 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:38:16.798073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:38:16.798090 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:38:16.798107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:16.798125 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:38:16.798146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:38:16.798163 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:38:16.798183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:38:16.798201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:16.798219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:16.798236 systemd[1]: Reached target slices.target - Slice Units. 
Sep 12 17:38:16.798253 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:38:16.798271 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:38:16.798289 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:38:16.798309 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:16.798327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:16.798345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:16.798363 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:38:16.798381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:38:16.798403 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:38:16.798421 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:38:16.798439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:16.798457 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:38:16.798475 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:38:16.798493 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:38:16.798511 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:38:16.798530 systemd[1]: Reached target machines.target - Containers. Sep 12 17:38:16.798550 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:38:16.798569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:16.798586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:38:16.798604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:38:16.798622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:16.798640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:16.798658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:16.798676 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:38:16.798694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:16.798714 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:38:16.798732 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:38:16.798757 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:38:16.798775 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:38:16.798794 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:38:16.798813 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:38:16.798830 kernel: fuse: init (API version 7.39) Sep 12 17:38:16.798847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:38:16.798868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 17:38:16.798886 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:38:16.798904 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:16.798921 kernel: loop: module loaded Sep 12 17:38:16.798938 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:38:16.798956 systemd[1]: Stopped verity-setup.service. Sep 12 17:38:16.798974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:16.799013 systemd-journald[1266]: Collecting audit messages is disabled. Sep 12 17:38:16.799051 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:38:16.799070 systemd-journald[1266]: Journal started Sep 12 17:38:16.799105 systemd-journald[1266]: Runtime Journal (/run/log/journal/312804737fdf438fa4edc621ee45bfcc) is 8.0M, max 158.8M, 150.8M free. Sep 12 17:38:15.980292 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:38:16.133857 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 17:38:16.134228 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:38:16.809759 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:38:16.812435 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:38:16.815482 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:38:16.818003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:38:16.821418 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:38:16.825982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:38:16.848681 kernel: ACPI: bus type drm_connector registered Sep 12 17:38:16.830284 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:38:16.834097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:16.837852 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:38:16.838050 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:38:16.841644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:16.841912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:16.845468 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:16.845676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:16.849474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:16.849661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:16.854278 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:38:16.854492 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:38:16.858164 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:16.858435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:16.862128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:38:16.865941 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:38:16.881036 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 17:38:16.889470 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:38:16.908856 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:38:16.913284 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:38:16.916560 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:38:16.916705 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:16.921090 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:38:16.927023 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:38:16.932965 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:38:16.935978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:16.957911 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:38:16.961706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:38:16.964814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:16.973094 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:38:16.976095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:16.977154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:38:16.984853 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:38:16.995909 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:38:17.002103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:17.009459 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:38:17.013146 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:38:17.016651 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:38:17.017572 systemd-journald[1266]: Time spent on flushing to /var/log/journal/312804737fdf438fa4edc621ee45bfcc is 33.015ms for 956 entries. Sep 12 17:38:17.017572 systemd-journald[1266]: System Journal (/var/log/journal/312804737fdf438fa4edc621ee45bfcc) is 8.0M, max 2.6G, 2.6G free. Sep 12 17:38:17.066610 systemd-journald[1266]: Received client request to flush runtime journal. Sep 12 17:38:17.066652 kernel: loop0: detected capacity change from 0 to 224512 Sep 12 17:38:17.023002 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:38:17.029704 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:38:17.044564 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:38:17.060930 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:38:17.068479 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:38:17.085814 udevadm[1323]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:38:17.114905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:38:17.115561 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:38:17.156769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:38:17.188507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:17.223773 kernel: loop1: detected capacity change from 0 to 142488 Sep 12 17:38:17.567666 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:38:17.575220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:38:17.734396 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 12 17:38:17.734419 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 12 17:38:17.740573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:17.803037 kernel: loop2: detected capacity change from 0 to 140768 Sep 12 17:38:18.458770 kernel: loop3: detected capacity change from 0 to 31056 Sep 12 17:38:18.843267 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:38:18.855904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:18.879882 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Sep 12 17:38:18.942767 kernel: loop4: detected capacity change from 0 to 224512 Sep 12 17:38:18.971767 kernel: loop5: detected capacity change from 0 to 142488 Sep 12 17:38:18.995765 kernel: loop6: detected capacity change from 0 to 140768 Sep 12 17:38:19.018767 kernel: loop7: detected capacity change from 0 to 31056 Sep 12 17:38:19.031234 (sd-merge)[1340]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 17:38:19.031831 (sd-merge)[1340]: Merged extensions into '/usr'. Sep 12 17:38:19.035866 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:38:19.035881 systemd[1]: Reloading... Sep 12 17:38:19.091777 zram_generator::config[1362]: No configuration found. Sep 12 17:38:19.265669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:19.324491 systemd[1]: Reloading finished in 288 ms. Sep 12 17:38:19.349921 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:38:19.362909 systemd[1]: Starting ensure-sysext.service... Sep 12 17:38:19.366619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:38:19.397077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:19.412164 systemd[1]: Reloading requested from client PID 1424 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:38:19.412182 systemd[1]: Reloading... Sep 12 17:38:19.450386 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:38:19.452258 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
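The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr. A small sketch for inspecting that merged state on a running system, assuming the standard systemd-sysext CLI is present (its output format may differ between systemd versions):

    import subprocess

    # List the hierarchies systemd-sysext manages and the extensions merged into them.
    status = subprocess.run(
        ["systemd-sysext", "status"],
        capture_output=True, text=True, check=True,
    )
    print(status.stdout)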
Sep 12 17:38:19.459490 systemd-tmpfiles[1425]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:38:19.462036 systemd-tmpfiles[1425]: ACLs are not supported, ignoring. Sep 12 17:38:19.462126 systemd-tmpfiles[1425]: ACLs are not supported, ignoring. Sep 12 17:38:19.520200 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:19.520770 systemd-tmpfiles[1425]: Skipping /boot Sep 12 17:38:19.569980 systemd-tmpfiles[1425]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:19.571071 systemd-tmpfiles[1425]: Skipping /boot Sep 12 17:38:19.587764 zram_generator::config[1476]: No configuration found. Sep 12 17:38:19.691843 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 17:38:19.698938 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 17:38:19.706828 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 17:38:19.706903 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:38:19.720112 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:38:19.723487 kernel: hv_vmbus: registering driver hv_balloon Sep 12 17:38:19.725765 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 17:38:19.746850 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:38:19.930683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:20.077558 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:38:20.078531 systemd[1]: Reloading finished in 665 ms. Sep 12 17:38:20.083765 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1438) Sep 12 17:38:20.111966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:20.181508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:20.190083 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:38:20.235308 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:38:20.238900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:20.241778 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 12 17:38:20.241952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:20.250053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:20.263498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:20.266440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:20.275017 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:38:20.278523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:20.284031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:38:20.289785 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
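
The sd-merge lines show systemd-sysext locating four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure') and merging them into /usr, which is consistent with the loop devices logged just before. As a rough illustration only (the directories scanned and the raw-image naming follow sysext's documented defaults and are assumptions, not taken from this log), candidate extensions could be enumerated like this:

    from pathlib import Path

    # Directories systemd-sysext is documented to scan for extension images;
    # treat this list as an assumption, not something confirmed by the log.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        found = []
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                # Extensions ship either as <name>.raw disk images or plain directories.
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry)
        return found

    for ext in list_extensions():
        print(ext)
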
Sep 12 17:38:20.292825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:20.292921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:20.314457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:20.315182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:20.343673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 17:38:20.348924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:20.349123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:20.353328 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:20.353515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:20.376220 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:20.376579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:20.383025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:20.387599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:20.395899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:20.405278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:20.410294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:20.418038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:38:20.423016 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:38:20.440384 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:38:20.446266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:20.447249 systemd[1]: Finished ensure-sysext.service. Sep 12 17:38:20.450164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:38:20.454255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:20.454436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:20.458181 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:38:20.463187 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:20.463352 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:20.466534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:20.466701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:20.470204 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:20.470355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:20.474055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:20.474279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:38:20.497012 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:38:20.500544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:20.500643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:20.505981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:20.517603 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:38:20.557210 augenrules[1639]: No rules Sep 12 17:38:20.558892 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:38:20.570416 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:38:20.650846 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:38:20.663722 lvm[1629]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:20.673292 systemd-resolved[1595]: Positive Trust Anchors: Sep 12 17:38:20.673315 systemd-resolved[1595]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:38:20.673374 systemd-resolved[1595]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:38:20.705710 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:38:20.706202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:20.715110 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:38:20.719807 lvm[1652]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:20.728723 systemd-resolved[1595]: Using system hostname 'ci-4081.3.6-a-da806c5a3d'. Sep 12 17:38:20.731074 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:38:20.734495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:20.750811 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:38:20.824108 systemd-networkd[1594]: lo: Link UP Sep 12 17:38:20.824120 systemd-networkd[1594]: lo: Gained carrier Sep 12 17:38:20.826973 systemd-networkd[1594]: Enumeration completed Sep 12 17:38:20.827509 systemd-networkd[1594]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:20.827514 systemd-networkd[1594]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:38:20.827916 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:38:20.831176 systemd[1]: Reached target network.target - Network. Sep 12 17:38:20.838920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
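
The systemd-resolved "Positive Trust Anchors" line above is the DNSSEC root trust anchor in DS presentation format: owner ".", key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest. A small sketch that splits that presentation string into named fields (pure string handling, no resolver involved):

    def parse_ds(record: str) -> dict:
        """Parse a DS record in presentation format,
        e.g. '. IN DS 20326 8 2 e06d44b8...' -> field dict."""
        owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split(maxsplit=6)
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": int(algorithm),      # 8 = RSA/SHA-256
            "digest_type": int(digest_type),  # 2 = SHA-256
            "digest": digest.replace(" ", ""),
        }

    anchor = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    print(parse_ds(anchor))
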
Sep 12 17:38:20.886339 kernel: mlx5_core 6432:00:02.0 enP25650s1: Link up Sep 12 17:38:20.886680 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 17:38:20.905854 kernel: hv_netvsc 6045bdd2-7f6c-6045-bdd2-7f6c6045bdd2 eth0: Data path switched to VF: enP25650s1 Sep 12 17:38:20.906808 systemd-networkd[1594]: enP25650s1: Link UP Sep 12 17:38:20.906992 systemd-networkd[1594]: eth0: Link UP Sep 12 17:38:20.906998 systemd-networkd[1594]: eth0: Gained carrier Sep 12 17:38:20.907023 systemd-networkd[1594]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:20.913600 systemd-networkd[1594]: enP25650s1: Gained carrier Sep 12 17:38:20.976809 systemd-networkd[1594]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 12 17:38:21.762048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:22.190975 systemd-networkd[1594]: eth0: Gained IPv6LL Sep 12 17:38:22.194417 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:38:22.198377 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:38:22.845971 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:38:22.849928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:26.215431 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:38:26.227318 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:38:26.236080 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:38:26.262154 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:38:26.265424 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:38:26.268481 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:38:26.271782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:38:26.275564 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:38:26.278792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:38:26.281889 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:38:26.285392 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:38:26.285441 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:38:26.287782 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:38:26.318661 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:38:26.323026 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:38:26.330750 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:38:26.334182 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:38:26.337060 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:38:26.339711 systemd[1]: Reached target basic.target - Basic System. 
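
The DHCPv4 line above (10.200.4.37/24, gateway 10.200.4.1, offered by 168.63.129.16, the Azure platform wire-server address) can be sanity-checked with nothing but the standard ipaddress module; a short sketch confirming the gateway sits inside the leased subnet:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.4.37/24")     # leased address/prefix from the log
    gateway = ipaddress.ip_address("10.200.4.1")         # default gateway from the log

    print(iface.network)                     # 10.200.4.0/24
    print(gateway in iface.network)          # True: the gateway is on-link
    print(iface.network.broadcast_address)   # 10.200.4.255, matching the brd waagent prints later
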
Sep 12 17:38:26.342294 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:26.342337 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:26.366867 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 17:38:26.372875 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:38:26.386902 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:38:26.397413 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:38:26.403162 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:38:26.408926 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:38:26.411718 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:38:26.411846 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 12 17:38:26.418908 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 17:38:26.422001 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 17:38:26.423869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:26.425593 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 12 17:38:26.429571 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:38:26.434920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:38:26.440869 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:38:26.442084 jq[1671]: false Sep 12 17:38:26.450960 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:38:26.456392 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:38:26.476331 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:38:26.479676 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:38:26.480344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:38:26.484022 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:38:26.490564 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:38:26.497190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:38:26.498040 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:38:26.501533 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:38:26.501872 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 17:38:26.515895 KVP[1673]: KVP starting; pid is:1673 Sep 12 17:38:26.532150 chronyd[1698]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 12 17:38:26.543295 KVP[1673]: KVP LIC Version: 3.1 Sep 12 17:38:26.543755 kernel: hv_utils: KVP IC version 4.0 Sep 12 17:38:26.548754 jq[1684]: true Sep 12 17:38:26.567469 (ntainerd)[1703]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:38:26.568512 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:38:26.568720 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:38:26.587103 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:38:26.595356 update_engine[1683]: I20250912 17:38:26.594941 1683 main.cc:92] Flatcar Update Engine starting Sep 12 17:38:26.597488 jq[1707]: true Sep 12 17:38:26.611826 chronyd[1698]: Timezone right/UTC failed leap second check, ignoring Sep 12 17:38:26.612087 chronyd[1698]: Loaded seccomp filter (level 2) Sep 12 17:38:26.614106 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 17:38:26.616064 extend-filesystems[1672]: Found loop4 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found loop5 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found loop6 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found loop7 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda1 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda2 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda3 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found usr Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda4 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda6 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda7 Sep 12 17:38:26.620846 extend-filesystems[1672]: Found sda9 Sep 12 17:38:26.620846 extend-filesystems[1672]: Checking size of /dev/sda9 Sep 12 17:38:26.631457 systemd-logind[1680]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:38:26.633886 systemd-logind[1680]: New seat seat0. Sep 12 17:38:26.639337 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:38:26.699178 extend-filesystems[1672]: Old size kept for /dev/sda9 Sep 12 17:38:26.699178 extend-filesystems[1672]: Found sr0 Sep 12 17:38:26.714001 tar[1688]: linux-amd64/LICENSE Sep 12 17:38:26.714001 tar[1688]: linux-amd64/helm Sep 12 17:38:26.699502 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:38:26.700012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:38:26.766813 bash[1745]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:38:26.761262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:38:26.769517 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:38:26.772615 dbus-daemon[1668]: [system] SELinux support is enabled Sep 12 17:38:26.774473 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:38:26.785158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
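
"Checking size of /dev/sda9" followed by "Old size kept for /dev/sda9" means extend-filesystems compared the partition with the filesystem on it and found nothing worth growing. A rough userspace version of that comparison (a sketch; both the device name and the assumption that it is mounted at "/" are illustrative, the log does not state the mountpoint):

    import os
    from pathlib import Path

    def sizes(device="sda9", mountpoint="/"):
        """Compare raw partition size with the size of the filesystem mounted on it."""
        # /sys/class/block/<dev>/size is expressed in 512-byte sectors.
        part_bytes = int(Path(f"/sys/class/block/{device}/size").read_text()) * 512
        st = os.statvfs(mountpoint)
        fs_bytes = st.f_frsize * st.f_blocks
        return part_bytes, fs_bytes

    part, fs = sizes()
    print(f"partition={part} filesystem={fs} slack={part - fs}")
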
Sep 12 17:38:26.785196 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:38:26.799162 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:38:26.799196 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:38:26.807785 dbus-daemon[1668]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:38:26.812968 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:38:26.821902 update_engine[1683]: I20250912 17:38:26.813149 1683 update_check_scheduler.cc:74] Next update check in 5m45s Sep 12 17:38:26.827607 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:38:26.877796 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1722) Sep 12 17:38:26.942622 coreos-metadata[1667]: Sep 12 17:38:26.942 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:38:26.948504 coreos-metadata[1667]: Sep 12 17:38:26.947 INFO Fetch successful Sep 12 17:38:26.948504 coreos-metadata[1667]: Sep 12 17:38:26.948 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 17:38:26.953903 coreos-metadata[1667]: Sep 12 17:38:26.953 INFO Fetch successful Sep 12 17:38:26.954047 coreos-metadata[1667]: Sep 12 17:38:26.953 INFO Fetching http://168.63.129.16/machine/ebf302da-f35f-46c5-91f6-c9d2e031d3f9/a47bafc7%2D9c13%2D4fc7%2D8b07%2D50fe4f432a77.%5Fci%2D4081.3.6%2Da%2Dda806c5a3d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 17:38:26.958906 coreos-metadata[1667]: Sep 12 17:38:26.958 INFO Fetch successful Sep 12 17:38:26.958906 coreos-metadata[1667]: Sep 12 17:38:26.958 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:38:26.974516 coreos-metadata[1667]: Sep 12 17:38:26.972 INFO Fetch successful Sep 12 17:38:27.087311 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:38:27.100017 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:38:27.110914 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:38:27.430406 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:38:27.468124 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:38:27.483677 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:38:27.490153 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 17:38:27.502214 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:38:27.502611 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:38:27.512264 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:38:27.558903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:38:27.579459 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:38:27.589506 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:38:27.593212 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:38:27.604972 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
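
coreos-metadata walks two endpoints visible above: the wire server at 168.63.129.16 (versions, goal state, shared config) and the instance metadata service at 169.254.169.254 (vmSize). A minimal sketch of the IMDS call, using only urllib and the "Metadata: true" header IMDS requires; the URL and api-version are copied from the log line, the printed value depends on the VM and is not in the log:

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())   # the instance size, e.g. a Standard_* SKU
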
Sep 12 17:38:27.704991 tar[1688]: linux-amd64/README.md Sep 12 17:38:27.717643 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:38:28.002822 containerd[1703]: time="2025-09-12T17:38:27.999556600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:38:28.029281 containerd[1703]: time="2025-09-12T17:38:28.029229600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.030812 containerd[1703]: time="2025-09-12T17:38:28.030764600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:28.030812 containerd[1703]: time="2025-09-12T17:38:28.030802900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:38:28.030943 containerd[1703]: time="2025-09-12T17:38:28.030822600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:38:28.031034 containerd[1703]: time="2025-09-12T17:38:28.031009900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:38:28.031081 containerd[1703]: time="2025-09-12T17:38:28.031037600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031142 containerd[1703]: time="2025-09-12T17:38:28.031117600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031185 containerd[1703]: time="2025-09-12T17:38:28.031138400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031347 containerd[1703]: time="2025-09-12T17:38:28.031321900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031347 containerd[1703]: time="2025-09-12T17:38:28.031343200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031435 containerd[1703]: time="2025-09-12T17:38:28.031373100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031435 containerd[1703]: time="2025-09-12T17:38:28.031392300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031531 containerd[1703]: time="2025-09-12T17:38:28.031508100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031769 containerd[1703]: time="2025-09-12T17:38:28.031731600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031902 containerd[1703]: time="2025-09-12T17:38:28.031879500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:38:28.031958 containerd[1703]: time="2025-09-12T17:38:28.031899800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:38:28.032030 containerd[1703]: time="2025-09-12T17:38:28.032007900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:38:28.032091 containerd[1703]: time="2025-09-12T17:38:28.032072200Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064349500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064435000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064459300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064480700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064506600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:38:28.064788 containerd[1703]: time="2025-09-12T17:38:28.064704800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065513600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065703000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065753400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065779500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065804600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065848400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065870300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065897200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065917500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065940200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065963000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.065984800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.066028400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.066649 containerd[1703]: time="2025-09-12T17:38:28.066054500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066087200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066111800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066132900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066156100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066177700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066201100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066222500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066249200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066270500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066288700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066311200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066338100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066377700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066414300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 12 17:38:28.067223 containerd[1703]: time="2025-09-12T17:38:28.066438800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:38:28.067734 containerd[1703]: time="2025-09-12T17:38:28.066505200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:38:28.067734 containerd[1703]: time="2025-09-12T17:38:28.066536100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:38:28.067734 containerd[1703]: time="2025-09-12T17:38:28.066552800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:38:28.067734 containerd[1703]: time="2025-09-12T17:38:28.066573800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:38:28.067734 containerd[1703]: time="2025-09-12T17:38:28.066592000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.068102 containerd[1703]: time="2025-09-12T17:38:28.066614500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:38:28.069761 containerd[1703]: time="2025-09-12T17:38:28.068315600Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:38:28.069761 containerd[1703]: time="2025-09-12T17:38:28.068370100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:38:28.069895 containerd[1703]: time="2025-09-12T17:38:28.068795800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:38:28.069895 containerd[1703]: time="2025-09-12T17:38:28.068890700Z" level=info msg="Connect containerd service" Sep 12 17:38:28.069895 containerd[1703]: time="2025-09-12T17:38:28.068942100Z" level=info msg="using legacy CRI server" Sep 12 17:38:28.069895 containerd[1703]: time="2025-09-12T17:38:28.068953300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:38:28.069895 containerd[1703]: time="2025-09-12T17:38:28.069093800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:38:28.070249 containerd[1703]: time="2025-09-12T17:38:28.070040400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:38:28.070249 containerd[1703]: time="2025-09-12T17:38:28.070162200Z" level=info msg="Start subscribing containerd event" Sep 12 17:38:28.070249 containerd[1703]: time="2025-09-12T17:38:28.070229500Z" level=info msg="Start recovering state" Sep 12 17:38:28.070402 containerd[1703]: time="2025-09-12T17:38:28.070313100Z" level=info msg="Start event monitor" Sep 12 17:38:28.070402 containerd[1703]: time="2025-09-12T17:38:28.070339400Z" level=info msg="Start snapshots syncer" Sep 12 17:38:28.070402 containerd[1703]: time="2025-09-12T17:38:28.070352200Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:38:28.070402 containerd[1703]: time="2025-09-12T17:38:28.070370100Z" level=info msg="Start streaming server" Sep 12 17:38:28.071533 containerd[1703]: time="2025-09-12T17:38:28.070882900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:38:28.071533 containerd[1703]: time="2025-09-12T17:38:28.070946600Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:38:28.071105 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:38:28.077973 containerd[1703]: time="2025-09-12T17:38:28.074443800Z" level=info msg="containerd successfully booted in 0.076205s" Sep 12 17:38:28.399478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:38:28.403877 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:38:28.406264 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:38:28.406871 systemd[1]: Startup finished in 1.057s (firmware) + 26.875s (loader) + 1.076s (kernel) + 11.823s (initrd) + 17.351s (userspace) = 58.184s. 
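
The "Startup finished" line can be cross-checked by summing the per-stage times: the displayed, already-rounded values add up to 58.182 s against the printed total of 58.184 s, the 2 ms gap being rounding of the underlying microsecond counters. A two-line check:

    stages = {"firmware": 1.057, "loader": 26.875, "kernel": 1.076,
              "initrd": 11.823, "userspace": 17.351}   # values from the log line
    print(round(sum(stages.values()), 3))              # 58.182, vs. the printed total of 58.184
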
Sep 12 17:38:28.969399 login[1807]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:38:28.971262 login[1808]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:38:29.003892 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:38:29.012163 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:38:29.015582 systemd-logind[1680]: New session 1 of user core. Sep 12 17:38:29.019853 systemd-logind[1680]: New session 2 of user core. Sep 12 17:38:29.058837 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:38:29.068159 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:38:29.089503 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:38:29.149909 kubelet[1827]: E0912 17:38:29.149866 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:38:29.154959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:38:29.155148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:38:29.155495 systemd[1]: kubelet.service: Consumed 1.039s CPU time. Sep 12 17:38:29.373901 systemd[1838]: Queued start job for default target default.target. Sep 12 17:38:29.380684 systemd[1838]: Created slice app.slice - User Application Slice. Sep 12 17:38:29.380719 systemd[1838]: Reached target paths.target - Paths. Sep 12 17:38:29.381080 systemd[1838]: Reached target timers.target - Timers. Sep 12 17:38:29.382438 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:38:29.394781 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:38:29.395015 systemd[1838]: Reached target sockets.target - Sockets. Sep 12 17:38:29.395039 systemd[1838]: Reached target basic.target - Basic System. Sep 12 17:38:29.395087 systemd[1838]: Reached target default.target - Main User Target. Sep 12 17:38:29.395122 systemd[1838]: Startup finished in 297ms. Sep 12 17:38:29.395521 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:38:29.402923 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:38:29.404128 systemd[1]: Started session-2.scope - Session 2 of User core. 
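
The kubelet failure above is purely a missing-file condition: /var/lib/kubelet/config.yaml does not exist yet (it is typically written later by kubeadm or other provisioning tooling), so the process exits with status 1 and systemd marks the unit failed. A trivial pre-flight check mirroring that condition (illustrative only, not kubelet's own code):

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")   # path taken from the error message

    if CONFIG.is_file():
        print(f"{CONFIG} present, kubelet can load its configuration")
    else:
        # This is the state the log shows: the unit fails until the file is provisioned.
        raise SystemExit(f"{CONFIG} missing: kubelet will exit 1, as logged")
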
Sep 12 17:38:29.891588 waagent[1809]: 2025-09-12T17:38:29.891487Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.891983Z INFO Daemon Daemon OS: flatcar 4081.3.6 Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.892125Z INFO Daemon Daemon Python: 3.11.9 Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.892901Z INFO Daemon Daemon Run daemon Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.893211Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.894009Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.894579Z INFO Daemon Daemon Activate resource disk Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.895352Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.899599Z INFO Daemon Daemon Found device: None Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.900492Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.901438Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.903707Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:38:29.929782 waagent[1809]: 2025-09-12T17:38:29.903999Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:38:29.934184 waagent[1809]: 2025-09-12T17:38:29.934103Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 17:38:29.941067 waagent[1809]: 2025-09-12T17:38:29.941006Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:38:29.949537 waagent[1809]: 2025-09-12T17:38:29.941231Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:38:29.949537 waagent[1809]: 2025-09-12T17:38:29.942094Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:38:30.047516 waagent[1809]: 2025-09-12T17:38:30.045010Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:38:30.072631 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 17:38:30.076146 waagent[1809]: 2025-09-12T17:38:30.076057Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:38:30.083284 waagent[1809]: 2025-09-12T17:38:30.076382Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:38:30.083284 waagent[1809]: 2025-09-12T17:38:30.077549Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 17:38:30.083284 waagent[1809]: 2025-09-12T17:38:30.078376Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:38:30.083284 waagent[1809]: 2025-09-12T17:38:30.079476Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:38:30.083284 waagent[1809]: 2025-09-12T17:38:30.080462Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:38:30.106318 waagent[1809]: 2025-09-12T17:38:30.106251Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:38:30.114951 waagent[1809]: 2025-09-12T17:38:30.106784Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:38:30.114951 waagent[1809]: 2025-09-12T17:38:30.107092Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:38:30.182084 waagent[1809]: 2025-09-12T17:38:30.181919Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:38:30.187267 waagent[1809]: 2025-09-12T17:38:30.182267Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 17:38:30.190378 waagent[1809]: 2025-09-12T17:38:30.190320Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:38:30.209211 waagent[1809]: 2025-09-12T17:38:30.209147Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Sep 12 17:38:30.210535 waagent[1809]: 2025-09-12T17:38:30.209973Z INFO Daemon Sep 12 17:38:30.210535 waagent[1809]: 2025-09-12T17:38:30.210507Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3128d911-ef9e-4922-9ac1-774944281bd0 eTag: 10934897393239967603 source: Fabric] Sep 12 17:38:30.214382 waagent[1809]: 2025-09-12T17:38:30.210957Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 17:38:30.214382 waagent[1809]: 2025-09-12T17:38:30.212095Z INFO Daemon Sep 12 17:38:30.214382 waagent[1809]: 2025-09-12T17:38:30.213233Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:38:30.219337 waagent[1809]: 2025-09-12T17:38:30.218458Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:38:30.292720 waagent[1809]: 2025-09-12T17:38:30.292631Z INFO Daemon Downloaded certificate {'thumbprint': '986349CF87702DF7C3E11231D1863A3BA57BB940', 'hasPrivateKey': True} Sep 12 17:38:30.299459 waagent[1809]: 2025-09-12T17:38:30.294079Z INFO Daemon Fetch goal state completed Sep 12 17:38:30.301874 waagent[1809]: 2025-09-12T17:38:30.301828Z INFO Daemon Daemon Starting provisioning Sep 12 17:38:30.302555 waagent[1809]: 2025-09-12T17:38:30.302053Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 17:38:30.302624 waagent[1809]: 2025-09-12T17:38:30.302543Z INFO Daemon Daemon Set hostname [ci-4081.3.6-a-da806c5a3d] Sep 12 17:38:30.340673 waagent[1809]: 2025-09-12T17:38:30.340587Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-a-da806c5a3d] Sep 12 17:38:30.348490 waagent[1809]: 2025-09-12T17:38:30.341116Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:38:30.348490 waagent[1809]: 2025-09-12T17:38:30.341618Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:38:30.377190 systemd-networkd[1594]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:30.377199 systemd-networkd[1594]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
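
"Test for route to 168.63.129.16" and "Route to 168.63.129.16 exists" record waagent confirming the wire server is reachable through the routing table (its own approach reads /proc/net/route, as the table it dumps later suggests). A cheap userspace approximation, explicitly not waagent's method, is a connected UDP socket: the kernel resolves the route and picks the local source address without sending any packet.

    import socket

    def source_for(dest="168.63.129.16", port=80):
        """Return the local address the kernel would use to reach dest."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect((dest, port))      # UDP connect() sends no traffic
            return s.getsockname()[0]
        finally:
            s.close()

    print(source_for())   # expected to print 10.200.4.37 on this VM
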
Sep 12 17:38:30.377249 systemd-networkd[1594]: eth0: DHCP lease lost Sep 12 17:38:30.378635 waagent[1809]: 2025-09-12T17:38:30.378542Z INFO Daemon Daemon Create user account if not exists Sep 12 17:38:30.395394 waagent[1809]: 2025-09-12T17:38:30.378943Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:38:30.395394 waagent[1809]: 2025-09-12T17:38:30.379853Z INFO Daemon Daemon Configure sudoer Sep 12 17:38:30.395394 waagent[1809]: 2025-09-12T17:38:30.380555Z INFO Daemon Daemon Configure sshd Sep 12 17:38:30.395394 waagent[1809]: 2025-09-12T17:38:30.381340Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:38:30.395394 waagent[1809]: 2025-09-12T17:38:30.383072Z INFO Daemon Daemon Deploy ssh public key. Sep 12 17:38:30.396840 systemd-networkd[1594]: eth0: DHCPv6 lease lost Sep 12 17:38:30.427826 systemd-networkd[1594]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16 Sep 12 17:38:31.486588 waagent[1809]: 2025-09-12T17:38:31.486522Z INFO Daemon Daemon Provisioning complete Sep 12 17:38:31.500193 waagent[1809]: 2025-09-12T17:38:31.500124Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:38:31.509137 waagent[1809]: 2025-09-12T17:38:31.500514Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 17:38:31.509137 waagent[1809]: 2025-09-12T17:38:31.500678Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 17:38:31.627278 waagent[1891]: 2025-09-12T17:38:31.627178Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 17:38:31.627762 waagent[1891]: 2025-09-12T17:38:31.627348Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Sep 12 17:38:31.627762 waagent[1891]: 2025-09-12T17:38:31.627430Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 12 17:38:31.674453 waagent[1891]: 2025-09-12T17:38:31.674354Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 17:38:31.674693 waagent[1891]: 2025-09-12T17:38:31.674640Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:38:31.674802 waagent[1891]: 2025-09-12T17:38:31.674757Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:38:31.682912 waagent[1891]: 2025-09-12T17:38:31.682839Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:38:31.688890 waagent[1891]: 2025-09-12T17:38:31.688829Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Sep 12 17:38:31.689377 waagent[1891]: 2025-09-12T17:38:31.689318Z INFO ExtHandler Sep 12 17:38:31.689465 waagent[1891]: 2025-09-12T17:38:31.689412Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8464ad0e-f0fa-4f12-877c-43b7020af8fa eTag: 10934897393239967603 source: Fabric] Sep 12 17:38:31.689786 waagent[1891]: 2025-09-12T17:38:31.689720Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
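
The goal-state certificate is identified above by the 40-hex-digit thumbprint 986349CF87702DF7C3E11231D1863A3BA57BB940. A 40-digit value is a 160-bit hash, i.e. SHA-1 over the DER-encoded certificate, which is how such thumbprints are conventionally computed; a sketch of that computation (the file path is a placeholder for illustration, not taken from the log):

    import hashlib
    from pathlib import Path

    def thumbprint(der_path: str) -> str:
        """SHA-1 over the DER bytes, upper-case hex - the conventional thumbprint form."""
        der = Path(der_path).read_bytes()
        return hashlib.sha1(der).hexdigest().upper()

    # Placeholder path; the agent keeps its goal-state certificates under /var/lib/waagent.
    print(thumbprint("/var/lib/waagent/example-cert.der"))
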
Sep 12 17:38:31.690360 waagent[1891]: 2025-09-12T17:38:31.690301Z INFO ExtHandler Sep 12 17:38:31.690442 waagent[1891]: 2025-09-12T17:38:31.690387Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:38:31.693963 waagent[1891]: 2025-09-12T17:38:31.693914Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:38:31.762832 waagent[1891]: 2025-09-12T17:38:31.762633Z INFO ExtHandler Downloaded certificate {'thumbprint': '986349CF87702DF7C3E11231D1863A3BA57BB940', 'hasPrivateKey': True} Sep 12 17:38:31.763298 waagent[1891]: 2025-09-12T17:38:31.763237Z INFO ExtHandler Fetch goal state completed Sep 12 17:38:31.775990 waagent[1891]: 2025-09-12T17:38:31.775923Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1891 Sep 12 17:38:31.776149 waagent[1891]: 2025-09-12T17:38:31.776095Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:38:31.777759 waagent[1891]: 2025-09-12T17:38:31.777697Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:38:31.778143 waagent[1891]: 2025-09-12T17:38:31.778097Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:38:31.855003 waagent[1891]: 2025-09-12T17:38:31.854953Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:38:31.855247 waagent[1891]: 2025-09-12T17:38:31.855197Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 17:38:31.867230 waagent[1891]: 2025-09-12T17:38:31.867180Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 17:38:31.874469 systemd[1]: Reloading requested from client PID 1904 ('systemctl') (unit waagent.service)... Sep 12 17:38:31.874485 systemd[1]: Reloading... Sep 12 17:38:31.962839 zram_generator::config[1934]: No configuration found. Sep 12 17:38:32.097059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:32.179794 systemd[1]: Reloading finished in 304 ms. Sep 12 17:38:32.208771 waagent[1891]: 2025-09-12T17:38:32.206437Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 17:38:32.214347 systemd[1]: Reloading requested from client PID 1995 ('systemctl') (unit waagent.service)... Sep 12 17:38:32.214365 systemd[1]: Reloading... Sep 12 17:38:32.312763 zram_generator::config[2030]: No configuration found. Sep 12 17:38:32.427788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:32.510207 systemd[1]: Reloading finished in 295 ms. Sep 12 17:38:32.533728 waagent[1891]: 2025-09-12T17:38:32.533615Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:38:32.533890 waagent[1891]: 2025-09-12T17:38:32.533843Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:38:33.518789 waagent[1891]: 2025-09-12T17:38:33.518681Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Sep 12 17:38:33.519477 waagent[1891]: 2025-09-12T17:38:33.519417Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 17:38:33.520345 waagent[1891]: 2025-09-12T17:38:33.520250Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 17:38:33.520948 waagent[1891]: 2025-09-12T17:38:33.520859Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:38:33.521181 waagent[1891]: 2025-09-12T17:38:33.521101Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:38:33.521267 waagent[1891]: 2025-09-12T17:38:33.521216Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:38:33.521391 waagent[1891]: 2025-09-12T17:38:33.521313Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:38:33.522093 waagent[1891]: 2025-09-12T17:38:33.522034Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:38:33.522274 waagent[1891]: 2025-09-12T17:38:33.522228Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 17:38:33.522655 waagent[1891]: 2025-09-12T17:38:33.522495Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:38:33.522826 waagent[1891]: 2025-09-12T17:38:33.522732Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:38:33.523077 waagent[1891]: 2025-09-12T17:38:33.523027Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 17:38:33.523077 waagent[1891]: 2025-09-12T17:38:33.523101Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
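The goal-state traffic logged above goes to the Azure WireServer at 168.63.129.16. As a minimal sketch of that exchange — assuming the /machine/?comp=goalstate path and x-ms-version header conventionally used by WALinuxAgent, and that the request is made from inside the VM:

    import urllib.request

    # Assumption: documented WireServer goal-state endpoint and API version header;
    # only reachable from within the Azure VM itself.
    req = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        # XML describing the incarnation, certificates and extensions configuration.
        print(resp.read().decode())
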
Sep 12 17:38:33.524004 waagent[1891]: 2025-09-12T17:38:33.523881Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:38:33.526279 waagent[1891]: 2025-09-12T17:38:33.526223Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:38:33.526917 waagent[1891]: 2025-09-12T17:38:33.526861Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:38:33.526917 waagent[1891]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:38:33.526917 waagent[1891]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:38:33.526917 waagent[1891]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:38:33.526917 waagent[1891]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:38:33.526917 waagent[1891]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:38:33.526917 waagent[1891]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:38:33.527210 waagent[1891]: 2025-09-12T17:38:33.526975Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:38:33.527210 waagent[1891]: 2025-09-12T17:38:33.527058Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:38:33.540079 waagent[1891]: 2025-09-12T17:38:33.539309Z INFO ExtHandler ExtHandler Sep 12 17:38:33.543762 waagent[1891]: 2025-09-12T17:38:33.542314Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ca498153-1054-4726-a838-16d89573c0a4 correlation 9b986975-7e27-4670-90d8-6c3983aa7248 created: 2025-09-12T17:37:17.820518Z] Sep 12 17:38:33.543762 waagent[1891]: 2025-09-12T17:38:33.542800Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 12 17:38:33.543762 waagent[1891]: 2025-09-12T17:38:33.543527Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Sep 12 17:38:33.582381 waagent[1891]: 2025-09-12T17:38:33.582312Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 200C20E4-99B2-4022-9B03-0D46083625CA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 17:38:33.599766 waagent[1891]: 2025-09-12T17:38:33.599673Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:38:33.599766 waagent[1891]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:38:33.599766 waagent[1891]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:38:33.599766 waagent[1891]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:7f:6c brd ff:ff:ff:ff:ff:ff Sep 12 17:38:33.599766 waagent[1891]: 3: enP25650s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d2:7f:6c brd ff:ff:ff:ff:ff:ff\ altname enP25650p0s2 Sep 12 17:38:33.599766 waagent[1891]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:38:33.599766 waagent[1891]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:38:33.599766 waagent[1891]: 2: eth0 inet 10.200.4.37/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:38:33.599766 waagent[1891]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:38:33.599766 waagent[1891]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:38:33.599766 waagent[1891]: 2: eth0 inet6 fe80::6245:bdff:fed2:7f6c/64 
scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:38:33.682570 waagent[1891]: 2025-09-12T17:38:33.682503Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Sep 12 17:38:33.682570 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.682570 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.682570 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.682570 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.682570 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.682570 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.682570 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:38:33.682570 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:38:33.682570 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:38:33.685855 waagent[1891]: 2025-09-12T17:38:33.685792Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:38:33.685855 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.685855 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.685855 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.685855 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.685855 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:38:33.685855 waagent[1891]: pkts bytes target prot opt in out source destination Sep 12 17:38:33.685855 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:38:33.685855 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:38:33.685855 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:38:33.686241 waagent[1891]: 2025-09-12T17:38:33.686108Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 17:38:39.405863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:38:39.411979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:39.516855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:38:39.529068 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:38:40.260782 kubelet[2125]: E0912 17:38:40.260708 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:38:40.264560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:38:40.264794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:38:50.397955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:38:50.403017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:38:50.414841 chronyd[1698]: Selected source PHC0 Sep 12 17:38:50.507706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
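The routing table dumped by MonitorHandler above is read from /proc/net/route, which stores IPv4 addresses as little-endian hex. A small sketch decoding the values seen in that dump (0104C80A is the DHCP gateway 10.200.4.1, 10813FA8 is the WireServer 168.63.129.16, FEA9FEA9 is 169.254.169.254):

    import socket
    import struct

    def decode_route_addr(hexaddr: str) -> str:
        """Convert a little-endian hex address from /proc/net/route to dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    # Values copied from the routing table logged above.
    for h in ("0104C80A", "0004C80A", "10813FA8", "FEA9FEA9"):
        print(h, "->", decode_route_addr(h))
    # 0104C80A -> 10.200.4.1     (DHCP gateway)
    # 0004C80A -> 10.200.4.0     (local subnet)
    # 10813FA8 -> 168.63.129.16  (WireServer)
    # FEA9FEA9 -> 169.254.169.254
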
Sep 12 17:38:50.518058 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:38:51.235757 kubelet[2140]: E0912 17:38:51.235672 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:38:51.238224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:38:51.238393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:01.397966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:39:01.403989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:01.507689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:01.512379 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:01.550323 kubelet[2155]: E0912 17:39:01.550245 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:01.552825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:01.553039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:04.866077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:39:04.867469 systemd[1]: Started sshd@0-10.200.4.37:22-10.200.16.10:50182.service - OpenSSH per-connection server daemon (10.200.16.10:50182). Sep 12 17:39:05.554142 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 50182 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:05.555946 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:05.559980 systemd-logind[1680]: New session 3 of user core. Sep 12 17:39:05.565931 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:39:06.084799 systemd[1]: Started sshd@1-10.200.4.37:22-10.200.16.10:50196.service - OpenSSH per-connection server daemon (10.200.16.10:50196). Sep 12 17:39:06.673179 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 50196 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:06.674900 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:06.678788 systemd-logind[1680]: New session 4 of user core. Sep 12 17:39:06.684915 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:39:07.299588 sshd[2169]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:07.303322 systemd[1]: sshd@1-10.200.4.37:22-10.200.16.10:50196.service: Deactivated successfully. Sep 12 17:39:07.305232 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:39:07.305926 systemd-logind[1680]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:39:07.306869 systemd-logind[1680]: Removed session 4. 
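The kubelet crash loop above (restart counters 1 through 3) is the expected state on a node that has not yet been joined: /var/lib/kubelet/config.yaml is normally generated by kubeadm init/join rather than shipped with the image. Purely to illustrate the shape of that file, a sketch of a minimal KubeletConfiguration consistent with the settings logged further down (systemd cgroup driver, static pods under /etc/kubernetes/manifests); field names are assumed from the kubelet.config.k8s.io/v1beta1 API, and on a real node kubeadm writes the file:

    # Illustrative only: kubeadm generates /var/lib/kubelet/config.yaml during init/join.
    minimal_kubelet_config = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "staticPodPath: /etc/kubernetes/manifests",
        "",
    ])
    print(minimal_kubelet_config)
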
Sep 12 17:39:07.403529 systemd[1]: Started sshd@2-10.200.4.37:22-10.200.16.10:50198.service - OpenSSH per-connection server daemon (10.200.16.10:50198). Sep 12 17:39:07.870632 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 12 17:39:07.997169 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 50198 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:07.999012 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:08.003857 systemd-logind[1680]: New session 5 of user core. Sep 12 17:39:08.013912 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:39:08.412618 sshd[2176]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:08.416948 systemd[1]: sshd@2-10.200.4.37:22-10.200.16.10:50198.service: Deactivated successfully. Sep 12 17:39:08.419189 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:39:08.420144 systemd-logind[1680]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:39:08.421259 systemd-logind[1680]: Removed session 5. Sep 12 17:39:08.517077 systemd[1]: Started sshd@3-10.200.4.37:22-10.200.16.10:50206.service - OpenSSH per-connection server daemon (10.200.16.10:50206). Sep 12 17:39:09.103009 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 50206 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:09.104810 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:09.110813 systemd-logind[1680]: New session 6 of user core. Sep 12 17:39:09.115919 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:39:09.523596 sshd[2183]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:09.526442 systemd[1]: sshd@3-10.200.4.37:22-10.200.16.10:50206.service: Deactivated successfully. Sep 12 17:39:09.528874 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:39:09.530936 systemd-logind[1680]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:39:09.532001 systemd-logind[1680]: Removed session 6. Sep 12 17:39:09.634094 systemd[1]: Started sshd@4-10.200.4.37:22-10.200.16.10:50216.service - OpenSSH per-connection server daemon (10.200.16.10:50216). Sep 12 17:39:10.217358 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 50216 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:10.218875 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:10.225970 systemd-logind[1680]: New session 7 of user core. Sep 12 17:39:10.230930 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:39:10.743133 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:39:10.743541 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:10.768399 sudo[2193]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:10.863540 sshd[2190]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:10.867577 systemd[1]: sshd@4-10.200.4.37:22-10.200.16.10:50216.service: Deactivated successfully. Sep 12 17:39:10.869868 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:39:10.871446 systemd-logind[1680]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:39:10.872389 systemd-logind[1680]: Removed session 7. 
Sep 12 17:39:10.970853 systemd[1]: Started sshd@5-10.200.4.37:22-10.200.16.10:36714.service - OpenSSH per-connection server daemon (10.200.16.10:36714). Sep 12 17:39:11.559631 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 36714 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:11.561243 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:11.562147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 17:39:11.574430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:11.580067 systemd-logind[1680]: New session 8 of user core. Sep 12 17:39:11.582199 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:39:11.760022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:11.764593 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:11.885350 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:39:11.885719 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:11.889880 sudo[2215]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:11.895480 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:39:11.895868 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:11.911216 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:39:11.912948 auditctl[2218]: No rules Sep 12 17:39:11.913337 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:39:11.913554 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:39:11.916071 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:39:12.156162 update_engine[1683]: I20250912 17:39:12.156096 1683 update_attempter.cc:509] Updating boot flags... Sep 12 17:39:12.321294 augenrules[2241]: No rules Sep 12 17:39:12.321621 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:39:12.325103 sudo[2214]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:12.346578 kubelet[2209]: E0912 17:39:12.346526 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:12.349134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:12.349491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:12.380791 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2255) Sep 12 17:39:12.428903 sshd[2198]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:12.452114 systemd[1]: sshd@5-10.200.4.37:22-10.200.16.10:36714.service: Deactivated successfully. Sep 12 17:39:12.455359 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:39:12.462778 systemd-logind[1680]: Session 8 logged out. Waiting for processes to exit. 
Sep 12 17:39:12.477159 systemd-logind[1680]: Removed session 8. Sep 12 17:39:12.491766 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2256) Sep 12 17:39:12.573910 systemd[1]: Started sshd@6-10.200.4.37:22-10.200.16.10:36728.service - OpenSSH per-connection server daemon (10.200.16.10:36728). Sep 12 17:39:12.612773 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2256) Sep 12 17:39:13.191044 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 36728 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:39:13.192532 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:39:13.197726 systemd-logind[1680]: New session 9 of user core. Sep 12 17:39:13.203935 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:39:13.517960 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:39:13.518422 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:39:15.045151 (dockerd)[2358]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:39:15.045595 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:39:16.858632 dockerd[2358]: time="2025-09-12T17:39:16.858564912Z" level=info msg="Starting up" Sep 12 17:39:17.473033 dockerd[2358]: time="2025-09-12T17:39:17.472980364Z" level=info msg="Loading containers: start." Sep 12 17:39:17.673836 kernel: Initializing XFRM netlink socket Sep 12 17:39:17.868446 systemd-networkd[1594]: docker0: Link UP Sep 12 17:39:17.892143 dockerd[2358]: time="2025-09-12T17:39:17.892102723Z" level=info msg="Loading containers: done." Sep 12 17:39:17.971386 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1218136540-merged.mount: Deactivated successfully. Sep 12 17:39:17.979065 dockerd[2358]: time="2025-09-12T17:39:17.978991165Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:39:17.979189 dockerd[2358]: time="2025-09-12T17:39:17.979145067Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:39:17.979298 dockerd[2358]: time="2025-09-12T17:39:17.979275368Z" level=info msg="Daemon has completed initialization" Sep 12 17:39:18.036313 dockerd[2358]: time="2025-09-12T17:39:18.036016217Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:39:18.036618 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:39:19.374779 containerd[1703]: time="2025-09-12T17:39:19.374724685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:39:20.189472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263151762.mount: Deactivated successfully. 
Sep 12 17:39:21.833341 containerd[1703]: time="2025-09-12T17:39:21.833281399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:21.835419 containerd[1703]: time="2025-09-12T17:39:21.835371619Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 12 17:39:21.837830 containerd[1703]: time="2025-09-12T17:39:21.837769742Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:21.841762 containerd[1703]: time="2025-09-12T17:39:21.841696180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:21.843204 containerd[1703]: time="2025-09-12T17:39:21.842704190Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.467925005s" Sep 12 17:39:21.843204 containerd[1703]: time="2025-09-12T17:39:21.842765891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 17:39:21.843706 containerd[1703]: time="2025-09-12T17:39:21.843627599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:39:22.397894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 17:39:22.402996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:22.516462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:22.527084 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:23.166409 kubelet[2558]: E0912 17:39:23.166352 2558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:23.168934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:23.169154 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
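Each "Pulled image ... in ..." line above records the compressed image size in bytes and the wall-clock pull duration, so pull throughput can be read straight off the log. A quick sketch using the kube-apiserver figures logged above:

    # Figures copied from the containerd log line for kube-apiserver:v1.32.9.
    size_bytes = 28_834_515
    duration_s = 2.467925005
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # roughly 11.7 MB/s
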
Sep 12 17:39:23.993197 containerd[1703]: time="2025-09-12T17:39:23.993137077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:23.995624 containerd[1703]: time="2025-09-12T17:39:23.995550300Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 12 17:39:23.998481 containerd[1703]: time="2025-09-12T17:39:23.998426928Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:24.004614 containerd[1703]: time="2025-09-12T17:39:24.004287185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:24.005351 containerd[1703]: time="2025-09-12T17:39:24.005311295Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.161599095s" Sep 12 17:39:24.005442 containerd[1703]: time="2025-09-12T17:39:24.005356896Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 17:39:24.006203 containerd[1703]: time="2025-09-12T17:39:24.006162204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:39:25.425758 containerd[1703]: time="2025-09-12T17:39:25.425692507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:25.428713 containerd[1703]: time="2025-09-12T17:39:25.428524534Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 12 17:39:25.431765 containerd[1703]: time="2025-09-12T17:39:25.431658165Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:25.436456 containerd[1703]: time="2025-09-12T17:39:25.436404211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:25.437475 containerd[1703]: time="2025-09-12T17:39:25.437438221Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.431143116s" Sep 12 17:39:25.437551 containerd[1703]: time="2025-09-12T17:39:25.437480721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 17:39:25.438109 
containerd[1703]: time="2025-09-12T17:39:25.437974826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:39:26.668585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500572195.mount: Deactivated successfully. Sep 12 17:39:27.185358 containerd[1703]: time="2025-09-12T17:39:27.185299316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:27.187901 containerd[1703]: time="2025-09-12T17:39:27.187838441Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 12 17:39:27.190655 containerd[1703]: time="2025-09-12T17:39:27.190618468Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:27.194663 containerd[1703]: time="2025-09-12T17:39:27.194609407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:27.195353 containerd[1703]: time="2025-09-12T17:39:27.195198813Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.757190787s" Sep 12 17:39:27.195353 containerd[1703]: time="2025-09-12T17:39:27.195237513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 17:39:27.196134 containerd[1703]: time="2025-09-12T17:39:27.196110921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:39:27.859314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124637668.mount: Deactivated successfully. 
Sep 12 17:39:29.031462 containerd[1703]: time="2025-09-12T17:39:29.031405267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.033775 containerd[1703]: time="2025-09-12T17:39:29.033712390Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 12 17:39:29.037029 containerd[1703]: time="2025-09-12T17:39:29.036971621Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.043209 containerd[1703]: time="2025-09-12T17:39:29.043170882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.044406 containerd[1703]: time="2025-09-12T17:39:29.044261792Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.84812117s" Sep 12 17:39:29.044406 containerd[1703]: time="2025-09-12T17:39:29.044299393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:39:29.045128 containerd[1703]: time="2025-09-12T17:39:29.045101800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:39:29.575432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1718899608.mount: Deactivated successfully. 
Sep 12 17:39:29.590370 containerd[1703]: time="2025-09-12T17:39:29.590329002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.592422 containerd[1703]: time="2025-09-12T17:39:29.592372222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 12 17:39:29.594950 containerd[1703]: time="2025-09-12T17:39:29.594899046Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.598781 containerd[1703]: time="2025-09-12T17:39:29.598710384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:29.599710 containerd[1703]: time="2025-09-12T17:39:29.599460891Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 554.32619ms" Sep 12 17:39:29.599710 containerd[1703]: time="2025-09-12T17:39:29.599498091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:39:29.600475 containerd[1703]: time="2025-09-12T17:39:29.600447400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:39:30.253897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609786890.mount: Deactivated successfully. Sep 12 17:39:32.430311 containerd[1703]: time="2025-09-12T17:39:32.430238670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:32.432932 containerd[1703]: time="2025-09-12T17:39:32.432878145Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 12 17:39:32.436357 containerd[1703]: time="2025-09-12T17:39:32.436294943Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:32.441547 containerd[1703]: time="2025-09-12T17:39:32.441493991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:32.442858 containerd[1703]: time="2025-09-12T17:39:32.442649724Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.842173123s" Sep 12 17:39:32.442858 containerd[1703]: time="2025-09-12T17:39:32.442688825Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 17:39:33.203641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Sep 12 17:39:33.211138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:33.702993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:33.716246 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:39:33.986575 kubelet[2718]: E0912 17:39:33.986148 2718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:39:33.989775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:39:33.989988 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:39:35.380858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:35.387037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:35.425887 systemd[1]: Reloading requested from client PID 2733 ('systemctl') (unit session-9.scope)... Sep 12 17:39:35.425913 systemd[1]: Reloading... Sep 12 17:39:35.531774 zram_generator::config[2769]: No configuration found. Sep 12 17:39:35.687386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:35.768559 systemd[1]: Reloading finished in 342 ms. Sep 12 17:39:35.830547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:35.834630 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:39:35.834957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:35.841272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:36.306334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:36.321073 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:36.356328 kubelet[2845]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:36.356328 kubelet[2845]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:39:36.356328 kubelet[2845]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
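The three deprecation warnings that follow (see the kubelet flags at 17:39:36) are about options the kubelet wants moved into its config file. As a hedged reference only — field names assumed from the kubelet.config.k8s.io/v1beta1 API, and the pause image is instead reported by the CRI runtime, as the --pod-infra-container-image warning itself states:

    # Assumed flag -> KubeletConfiguration field mapping (kubelet.config.k8s.io/v1beta1).
    DEPRECATED_FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: no config field; the sandbox image now
        # comes from the CRI runtime per the deprecation warning above.
    }
    for flag, field in DEPRECATED_FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag} -> {field}")
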
Sep 12 17:39:36.356821 kubelet[2845]: I0912 17:39:36.356419 2845 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:36.720941 kubelet[2845]: I0912 17:39:36.720895 2845 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:39:36.720941 kubelet[2845]: I0912 17:39:36.720929 2845 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:36.721285 kubelet[2845]: I0912 17:39:36.721264 2845 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:39:37.091698 kubelet[2845]: I0912 17:39:37.091055 2845 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:37.092958 kubelet[2845]: E0912 17:39:37.092916 2845 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:37.100788 kubelet[2845]: E0912 17:39:37.100700 2845 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:37.100788 kubelet[2845]: I0912 17:39:37.100767 2845 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:37.104329 kubelet[2845]: I0912 17:39:37.104299 2845 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:39:37.105827 kubelet[2845]: I0912 17:39:37.105778 2845 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:37.106018 kubelet[2845]: I0912 17:39:37.105825 2845 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-da806c5a3d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:39:37.106171 kubelet[2845]: I0912 17:39:37.106030 2845 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:37.106171 kubelet[2845]: I0912 17:39:37.106043 2845 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:39:37.106252 kubelet[2845]: I0912 17:39:37.106199 2845 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:37.109803 kubelet[2845]: I0912 17:39:37.109777 2845 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:39:37.109896 kubelet[2845]: I0912 17:39:37.109811 2845 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:37.109896 kubelet[2845]: I0912 17:39:37.109835 2845 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:39:37.109896 kubelet[2845]: I0912 17:39:37.109849 2845 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:37.119098 kubelet[2845]: W0912 17:39:37.117987 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:37.119098 kubelet[2845]: E0912 17:39:37.118074 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:37.119098 kubelet[2845]: W0912 
17:39:37.118162 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-da806c5a3d&limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:37.119098 kubelet[2845]: E0912 17:39:37.118200 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-da806c5a3d&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:37.119098 kubelet[2845]: I0912 17:39:37.118572 2845 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:37.119098 kubelet[2845]: I0912 17:39:37.119004 2845 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:37.119781 kubelet[2845]: W0912 17:39:37.119734 2845 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:39:37.122561 kubelet[2845]: I0912 17:39:37.122538 2845 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:39:37.122643 kubelet[2845]: I0912 17:39:37.122582 2845 server.go:1287] "Started kubelet" Sep 12 17:39:37.124762 kubelet[2845]: I0912 17:39:37.122760 2845 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:37.124762 kubelet[2845]: I0912 17:39:37.123615 2845 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:39:37.124762 kubelet[2845]: I0912 17:39:37.123882 2845 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:37.124762 kubelet[2845]: I0912 17:39:37.124293 2845 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:37.125774 kubelet[2845]: I0912 17:39:37.125731 2845 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:37.132582 kubelet[2845]: E0912 17:39:37.130649 2845 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-da806c5a3d.186499b9a0873e80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-da806c5a3d,UID:ci-4081.3.6-a-da806c5a3d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-da806c5a3d,},FirstTimestamp:2025-09-12 17:39:37.122553472 +0000 UTC m=+0.798300047,LastTimestamp:2025-09-12 17:39:37.122553472 +0000 UTC m=+0.798300047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-da806c5a3d,}" Sep 12 17:39:37.134764 kubelet[2845]: I0912 17:39:37.132883 2845 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:37.134764 kubelet[2845]: I0912 17:39:37.134196 2845 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:39:37.134764 kubelet[2845]: E0912 17:39:37.134424 2845 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" Sep 12 17:39:37.135091 kubelet[2845]: E0912 17:39:37.135057 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-da806c5a3d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="200ms" Sep 12 17:39:37.135277 kubelet[2845]: I0912 17:39:37.135252 2845 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:37.135373 kubelet[2845]: I0912 17:39:37.135353 2845 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:37.137877 kubelet[2845]: I0912 17:39:37.137854 2845 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:37.138140 kubelet[2845]: I0912 17:39:37.138123 2845 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:39:37.138268 kubelet[2845]: I0912 17:39:37.138257 2845 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:37.144079 kubelet[2845]: W0912 17:39:37.144035 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:37.144214 kubelet[2845]: E0912 17:39:37.144196 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:37.165795 kubelet[2845]: E0912 17:39:37.165761 2845 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:37.178140 kubelet[2845]: I0912 17:39:37.178119 2845 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:39:37.178313 kubelet[2845]: I0912 17:39:37.178294 2845 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:37.178406 kubelet[2845]: I0912 17:39:37.178396 2845 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:37.185002 kubelet[2845]: I0912 17:39:37.184968 2845 policy_none.go:49] "None policy: Start" Sep 12 17:39:37.185002 kubelet[2845]: I0912 17:39:37.184994 2845 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:39:37.185002 kubelet[2845]: I0912 17:39:37.185009 2845 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:37.195815 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:39:37.206667 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:39:37.210025 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
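Every "connection refused" against https://10.200.4.37:6443 above is the kubelet's client-go informers probing an API server that is not running yet; the static control-plane pods being laid out below are what will eventually serve that port. A minimal reachability probe, using the address and port taken from those errors:

    import socket

    # Host and port copied from the connection-refused errors in the log.
    def api_server_up(host: str = "10.200.4.37", port: int = 6443, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_server_up())  # False until the kube-apiserver static pod is serving
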
Sep 12 17:39:37.219020 kubelet[2845]: I0912 17:39:37.218415 2845 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:37.219020 kubelet[2845]: I0912 17:39:37.218635 2845 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:37.219020 kubelet[2845]: I0912 17:39:37.218648 2845 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:37.219020 kubelet[2845]: I0912 17:39:37.218924 2845 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:37.220492 kubelet[2845]: E0912 17:39:37.220468 2845 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:39:37.220663 kubelet[2845]: E0912 17:39:37.220640 2845 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-da806c5a3d\" not found" Sep 12 17:39:37.283602 kubelet[2845]: I0912 17:39:37.283449 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:37.286340 kubelet[2845]: I0912 17:39:37.285958 2845 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:39:37.286340 kubelet[2845]: I0912 17:39:37.285988 2845 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:39:37.286340 kubelet[2845]: I0912 17:39:37.286015 2845 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:39:37.286340 kubelet[2845]: I0912 17:39:37.286026 2845 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:39:37.286340 kubelet[2845]: E0912 17:39:37.286081 2845 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 12 17:39:37.288025 kubelet[2845]: W0912 17:39:37.287975 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:37.288525 kubelet[2845]: E0912 17:39:37.288461 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:37.321505 kubelet[2845]: I0912 17:39:37.321466 2845 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.322147 kubelet[2845]: E0912 17:39:37.322111 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.335688 kubelet[2845]: E0912 17:39:37.335654 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-da806c5a3d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="400ms" Sep 12 17:39:37.399153 systemd[1]: Created slice kubepods-burstable-pod04f59ac08c0b5640cecb83b3068dd475.slice - libcontainer container 
kubepods-burstable-pod04f59ac08c0b5640cecb83b3068dd475.slice. Sep 12 17:39:37.407404 kubelet[2845]: E0912 17:39:37.407372 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.410623 systemd[1]: Created slice kubepods-burstable-pod875c2855f42781a2ce5ea8abdb83db63.slice - libcontainer container kubepods-burstable-pod875c2855f42781a2ce5ea8abdb83db63.slice. Sep 12 17:39:37.420841 kubelet[2845]: E0912 17:39:37.420812 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.423665 systemd[1]: Created slice kubepods-burstable-poda9cdc148897f0bf694b5e9ca30c0e472.slice - libcontainer container kubepods-burstable-poda9cdc148897f0bf694b5e9ca30c0e472.slice. Sep 12 17:39:37.425422 kubelet[2845]: E0912 17:39:37.425400 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440802 kubelet[2845]: I0912 17:39:37.440750 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440802 kubelet[2845]: I0912 17:39:37.440786 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440969 kubelet[2845]: I0912 17:39:37.440820 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440969 kubelet[2845]: I0912 17:39:37.440844 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440969 kubelet[2845]: I0912 17:39:37.440864 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440969 kubelet[2845]: I0912 17:39:37.440886 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.440969 kubelet[2845]: I0912 17:39:37.440906 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.441129 kubelet[2845]: I0912 17:39:37.440927 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.441129 kubelet[2845]: I0912 17:39:37.440950 2845 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9cdc148897f0bf694b5e9ca30c0e472-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-da806c5a3d\" (UID: \"a9cdc148897f0bf694b5e9ca30c0e472\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.524600 kubelet[2845]: I0912 17:39:37.524565 2845 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.524992 kubelet[2845]: E0912 17:39:37.524962 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.708855 containerd[1703]: time="2025-09-12T17:39:37.708689373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-da806c5a3d,Uid:04f59ac08c0b5640cecb83b3068dd475,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:37.721913 containerd[1703]: time="2025-09-12T17:39:37.721859949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-da806c5a3d,Uid:875c2855f42781a2ce5ea8abdb83db63,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:37.729217 containerd[1703]: time="2025-09-12T17:39:37.729176557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-da806c5a3d,Uid:a9cdc148897f0bf694b5e9ca30c0e472,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:37.737001 kubelet[2845]: E0912 17:39:37.736956 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-da806c5a3d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="800ms" Sep 12 17:39:37.926919 kubelet[2845]: I0912 17:39:37.926883 2845 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:37.927312 kubelet[2845]: E0912 17:39:37.927283 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:38.008362 kubelet[2845]: W0912 
17:39:38.008254 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:38.008362 kubelet[2845]: E0912 17:39:38.008301 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:38.326157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881196929.mount: Deactivated successfully. Sep 12 17:39:38.345709 containerd[1703]: time="2025-09-12T17:39:38.345654423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:38.348642 containerd[1703]: time="2025-09-12T17:39:38.348478203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 12 17:39:38.351269 containerd[1703]: time="2025-09-12T17:39:38.351232182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:38.354530 containerd[1703]: time="2025-09-12T17:39:38.354482775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:38.357013 containerd[1703]: time="2025-09-12T17:39:38.356973946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:38.359734 containerd[1703]: time="2025-09-12T17:39:38.359698223Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:38.362018 containerd[1703]: time="2025-09-12T17:39:38.361966288Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:38.366020 containerd[1703]: time="2025-09-12T17:39:38.365971002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:38.366930 containerd[1703]: time="2025-09-12T17:39:38.366683422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.750271ms" Sep 12 17:39:38.368648 containerd[1703]: time="2025-09-12T17:39:38.368613677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 639.367218ms" Sep 12 17:39:38.369144 containerd[1703]: time="2025-09-12T17:39:38.369116092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.323115ms" Sep 12 17:39:38.538180 kubelet[2845]: E0912 17:39:38.538130 2845 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-da806c5a3d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="1.6s" Sep 12 17:39:38.621796 kubelet[2845]: W0912 17:39:38.621629 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:38.621796 kubelet[2845]: E0912 17:39:38.621695 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:38.662033 kubelet[2845]: W0912 17:39:38.661995 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:38.662276 kubelet[2845]: E0912 17:39:38.662040 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:38.699927 kubelet[2845]: W0912 17:39:38.699867 2845 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-da806c5a3d&limit=500&resourceVersion=0": dial tcp 10.200.4.37:6443: connect: connection refused Sep 12 17:39:38.700077 kubelet[2845]: E0912 17:39:38.699934 2845 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-da806c5a3d&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:38.729862 kubelet[2845]: I0912 17:39:38.729823 2845 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:38.730264 kubelet[2845]: E0912 17:39:38.730223 2845 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:39.083106 containerd[1703]: time="2025-09-12T17:39:39.082992574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:39.083106 containerd[1703]: time="2025-09-12T17:39:39.083062701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:39.084590 containerd[1703]: time="2025-09-12T17:39:39.083085810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.084590 containerd[1703]: time="2025-09-12T17:39:39.083172344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.085875 containerd[1703]: time="2025-09-12T17:39:39.085541866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:39.085875 containerd[1703]: time="2025-09-12T17:39:39.085606391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:39.085875 containerd[1703]: time="2025-09-12T17:39:39.085629200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.085875 containerd[1703]: time="2025-09-12T17:39:39.085720836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.088031 containerd[1703]: time="2025-09-12T17:39:39.087329562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:39.088031 containerd[1703]: time="2025-09-12T17:39:39.087878676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:39.088300 containerd[1703]: time="2025-09-12T17:39:39.088204603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.091626 containerd[1703]: time="2025-09-12T17:39:39.091290004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:39.132968 systemd[1]: Started cri-containerd-0ab19d4ffec391424d2a09061475975487b83a3a2a34eab1c3aea5a4535de323.scope - libcontainer container 0ab19d4ffec391424d2a09061475975487b83a3a2a34eab1c3aea5a4535de323. Sep 12 17:39:39.135357 systemd[1]: Started cri-containerd-17b25c94c95a589b10f6f8c1b341cfb2f7b9e81ced3de3f28d4556b25641d69f.scope - libcontainer container 17b25c94c95a589b10f6f8c1b341cfb2f7b9e81ced3de3f28d4556b25641d69f. Sep 12 17:39:39.138041 systemd[1]: Started cri-containerd-844056cf0d0daa81747e0b3bd521e1027ae9e5f0a674b5da4a6ffd4693994210.scope - libcontainer container 844056cf0d0daa81747e0b3bd521e1027ae9e5f0a674b5da4a6ffd4693994210. 
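The lease-controller retries above back off geometrically: the reported retry interval doubles from 400ms to 800ms to 1.6s while the API server at 10.200.4.37:6443 is still refusing connections. A minimal sketch of that doubling pattern follows; the backoff_intervals helper and the 7-second cap are illustrative assumptions, not values taken from this log.

# Illustrative sketch of the doubling retry delays seen above (400ms -> 800ms -> 1.6s).
# The cap is an assumed value for illustration only.
def backoff_intervals(initial=0.4, factor=2.0, cap=7.0, attempts=6):
    """Yield successive retry delays in seconds, doubling until the cap is reached."""
    delay = initial
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_intervals()))  # [0.4, 0.8, 1.6, 3.2, 6.4, 7.0]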
Sep 12 17:39:39.198356 containerd[1703]: time="2025-09-12T17:39:39.196727447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-da806c5a3d,Uid:04f59ac08c0b5640cecb83b3068dd475,Namespace:kube-system,Attempt:0,} returns sandbox id \"844056cf0d0daa81747e0b3bd521e1027ae9e5f0a674b5da4a6ffd4693994210\"" Sep 12 17:39:39.202862 containerd[1703]: time="2025-09-12T17:39:39.202712177Z" level=info msg="CreateContainer within sandbox \"844056cf0d0daa81747e0b3bd521e1027ae9e5f0a674b5da4a6ffd4693994210\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:39:39.225193 containerd[1703]: time="2025-09-12T17:39:39.225105994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-da806c5a3d,Uid:875c2855f42781a2ce5ea8abdb83db63,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab19d4ffec391424d2a09061475975487b83a3a2a34eab1c3aea5a4535de323\"" Sep 12 17:39:39.228769 containerd[1703]: time="2025-09-12T17:39:39.228652574Z" level=info msg="CreateContainer within sandbox \"0ab19d4ffec391424d2a09061475975487b83a3a2a34eab1c3aea5a4535de323\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:39:39.234948 containerd[1703]: time="2025-09-12T17:39:39.234721837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-da806c5a3d,Uid:a9cdc148897f0bf694b5e9ca30c0e472,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b25c94c95a589b10f6f8c1b341cfb2f7b9e81ced3de3f28d4556b25641d69f\"" Sep 12 17:39:39.236963 kubelet[2845]: E0912 17:39:39.236844 2845 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:39.240920 containerd[1703]: time="2025-09-12T17:39:39.240734077Z" level=info msg="CreateContainer within sandbox \"17b25c94c95a589b10f6f8c1b341cfb2f7b9e81ced3de3f28d4556b25641d69f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:39:39.241366 containerd[1703]: time="2025-09-12T17:39:39.241143637Z" level=info msg="CreateContainer within sandbox \"844056cf0d0daa81747e0b3bd521e1027ae9e5f0a674b5da4a6ffd4693994210\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5309a0a5a546b9720e032e60c337fece92ca3b2f32cfaf8eed03f27e6755322e\"" Sep 12 17:39:39.242061 containerd[1703]: time="2025-09-12T17:39:39.241971059Z" level=info msg="StartContainer for \"5309a0a5a546b9720e032e60c337fece92ca3b2f32cfaf8eed03f27e6755322e\"" Sep 12 17:39:39.276124 systemd[1]: Started cri-containerd-5309a0a5a546b9720e032e60c337fece92ca3b2f32cfaf8eed03f27e6755322e.scope - libcontainer container 5309a0a5a546b9720e032e60c337fece92ca3b2f32cfaf8eed03f27e6755322e. 
Sep 12 17:39:39.276972 containerd[1703]: time="2025-09-12T17:39:39.276902857Z" level=info msg="CreateContainer within sandbox \"0ab19d4ffec391424d2a09061475975487b83a3a2a34eab1c3aea5a4535de323\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87\"" Sep 12 17:39:39.277961 containerd[1703]: time="2025-09-12T17:39:39.277929056Z" level=info msg="StartContainer for \"c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87\"" Sep 12 17:39:39.291265 containerd[1703]: time="2025-09-12T17:39:39.291222231Z" level=info msg="CreateContainer within sandbox \"17b25c94c95a589b10f6f8c1b341cfb2f7b9e81ced3de3f28d4556b25641d69f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172\"" Sep 12 17:39:39.293102 containerd[1703]: time="2025-09-12T17:39:39.293051343Z" level=info msg="StartContainer for \"7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172\"" Sep 12 17:39:39.352436 systemd[1]: run-containerd-runc-k8s.io-c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87-runc.Om4vF8.mount: Deactivated successfully. Sep 12 17:39:39.363357 systemd[1]: Started cri-containerd-c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87.scope - libcontainer container c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87. Sep 12 17:39:39.389981 systemd[1]: Started cri-containerd-7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172.scope - libcontainer container 7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172. Sep 12 17:39:39.400040 containerd[1703]: time="2025-09-12T17:39:39.399987170Z" level=info msg="StartContainer for \"5309a0a5a546b9720e032e60c337fece92ca3b2f32cfaf8eed03f27e6755322e\" returns successfully" Sep 12 17:39:39.454924 containerd[1703]: time="2025-09-12T17:39:39.454871634Z" level=info msg="StartContainer for \"c7492294f5b4f8443676d23c5a9e958f80a06d087ec7a9897c652eff752d7c87\" returns successfully" Sep 12 17:39:39.509787 containerd[1703]: time="2025-09-12T17:39:39.509673267Z" level=info msg="StartContainer for \"7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172\" returns successfully" Sep 12 17:39:40.315894 systemd[1]: run-containerd-runc-k8s.io-7313d78a749ac85b62a9d1be5b5306d606f6a08fa116779c4032eb9953383172-runc.EdQF3z.mount: Deactivated successfully. 
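Each sandbox or container id that containerd reports above ("returns sandbox id ...", "returns container id ...") reappears as a systemd transient scope named cri-containerd-<id>.scope. A small sketch that pairs the two views of the same journal text; the correlate helper and the log_text argument are assumptions for illustration.

import re

# containerd reports ids as: returns sandbox id \"844056cf...\"
# systemd then logs:         Started cri-containerd-844056cf....scope - libcontainer container ...
RETURNS = re.compile(r'returns (?:sandbox|container) id \\?"([0-9a-f]{64})\\?"')
SCOPE = re.compile(r'Started cri-containerd-([0-9a-f]{64})\.scope')

def correlate(log_text):
    """Return ids that containerd reported and that systemd also started as a scope."""
    reported = set(RETURNS.findall(log_text))
    started = set(SCOPE.findall(log_text))
    return sorted(reported & started)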
Sep 12 17:39:40.319459 kubelet[2845]: E0912 17:39:40.319086 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:40.325251 kubelet[2845]: E0912 17:39:40.325029 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:40.327792 kubelet[2845]: E0912 17:39:40.327769 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:40.332686 kubelet[2845]: I0912 17:39:40.332670 2845 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:41.332809 kubelet[2845]: E0912 17:39:41.330488 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:41.334287 kubelet[2845]: E0912 17:39:41.333625 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:41.334287 kubelet[2845]: E0912 17:39:41.334096 2845 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.805422 kubelet[2845]: I0912 17:39:42.805274 2845 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.805422 kubelet[2845]: E0912 17:39:42.805311 2845 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-a-da806c5a3d\": node \"ci-4081.3.6-a-da806c5a3d\" not found" Sep 12 17:39:42.807968 kubelet[2845]: I0912 17:39:42.807510 2845 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.807968 kubelet[2845]: I0912 17:39:42.807800 2845 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.825171 kubelet[2845]: W0912 17:39:42.825124 2845 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:42.829767 kubelet[2845]: W0912 17:39:42.829333 2845 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:42.835187 kubelet[2845]: I0912 17:39:42.835158 2845 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.844437 kubelet[2845]: W0912 17:39:42.844411 2845 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:42.844564 kubelet[2845]: E0912 17:39:42.844464 2845 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.844564 kubelet[2845]: I0912 17:39:42.844482 2845 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.850975 kubelet[2845]: W0912 17:39:42.850902 2845 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:42.851205 kubelet[2845]: I0912 17:39:42.851183 2845 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:42.859151 kubelet[2845]: W0912 17:39:42.859122 2845 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:42.859250 kubelet[2845]: E0912 17:39:42.859181 2845 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:43.547170 kubelet[2845]: I0912 17:39:43.547120 2845 apiserver.go:52] "Watching apiserver" Sep 12 17:39:43.639305 kubelet[2845]: I0912 17:39:43.639261 2845 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:39:44.978940 systemd[1]: Reloading requested from client PID 3117 ('systemctl') (unit session-9.scope)... Sep 12 17:39:44.978957 systemd[1]: Reloading... Sep 12 17:39:45.096779 zram_generator::config[3163]: No configuration found. Sep 12 17:39:45.215016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:45.316437 systemd[1]: Reloading finished in 337 ms. Sep 12 17:39:45.359601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:45.380254 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:39:45.380538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:45.387660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:45.800366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:45.813088 (kubelet)[3224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:45.865156 kubelet[3224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:45.866758 kubelet[3224]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:39:45.866758 kubelet[3224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:39:45.866758 kubelet[3224]: I0912 17:39:45.865700 3224 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:45.874231 kubelet[3224]: I0912 17:39:45.874206 3224 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:39:45.874231 kubelet[3224]: I0912 17:39:45.874226 3224 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:45.874496 kubelet[3224]: I0912 17:39:45.874475 3224 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:39:45.876064 kubelet[3224]: I0912 17:39:45.876042 3224 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:39:45.879764 kubelet[3224]: I0912 17:39:45.879053 3224 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:45.883868 kubelet[3224]: E0912 17:39:45.883840 3224 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:45.883868 kubelet[3224]: I0912 17:39:45.883868 3224 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:45.887647 kubelet[3224]: I0912 17:39:45.887622 3224 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:39:45.887876 kubelet[3224]: I0912 17:39:45.887834 3224 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:45.888044 kubelet[3224]: I0912 17:39:45.887868 3224 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-da806c5a3d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:39:45.888196 kubelet[3224]: I0912 17:39:45.888052 3224 
topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:45.888196 kubelet[3224]: I0912 17:39:45.888066 3224 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:39:45.888196 kubelet[3224]: I0912 17:39:45.888122 3224 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:45.888713 kubelet[3224]: I0912 17:39:45.888683 3224 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:39:45.889187 kubelet[3224]: I0912 17:39:45.889087 3224 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:45.889187 kubelet[3224]: I0912 17:39:45.889120 3224 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:39:45.889187 kubelet[3224]: I0912 17:39:45.889133 3224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:45.892571 kubelet[3224]: I0912 17:39:45.892474 3224 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:45.893157 kubelet[3224]: I0912 17:39:45.893137 3224 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:45.893695 kubelet[3224]: I0912 17:39:45.893601 3224 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:39:45.893695 kubelet[3224]: I0912 17:39:45.893632 3224 server.go:1287] "Started kubelet" Sep 12 17:39:45.898580 kubelet[3224]: I0912 17:39:45.898560 3224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:45.907805 kubelet[3224]: I0912 17:39:45.906550 3224 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:45.908386 kubelet[3224]: I0912 17:39:45.908191 3224 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:39:45.912056 kubelet[3224]: I0912 17:39:45.912008 3224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:45.913266 kubelet[3224]: I0912 17:39:45.913218 3224 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:45.915019 kubelet[3224]: I0912 17:39:45.914972 3224 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:45.920503 kubelet[3224]: I0912 17:39:45.920345 3224 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:39:45.920778 kubelet[3224]: E0912 17:39:45.920761 3224 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-da806c5a3d\" not found" Sep 12 17:39:45.924052 kubelet[3224]: I0912 17:39:45.924028 3224 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:45.924172 kubelet[3224]: I0912 17:39:45.924144 3224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:45.925550 kubelet[3224]: E0912 17:39:45.924926 3224 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:45.927688 kubelet[3224]: I0912 17:39:45.927673 3224 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:39:45.927991 kubelet[3224]: I0912 17:39:45.927978 3224 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:45.928392 kubelet[3224]: I0912 17:39:45.928375 3224 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:45.929860 kubelet[3224]: I0912 17:39:45.929822 3224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:45.931074 kubelet[3224]: I0912 17:39:45.931049 3224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:39:45.931074 kubelet[3224]: I0912 17:39:45.931076 3224 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:39:45.931192 kubelet[3224]: I0912 17:39:45.931098 3224 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:39:45.931192 kubelet[3224]: I0912 17:39:45.931106 3224 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:39:45.931192 kubelet[3224]: E0912 17:39:45.931151 3224 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:45.974817 kubelet[3224]: I0912 17:39:45.974788 3224 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:39:45.974817 kubelet[3224]: I0912 17:39:45.974823 3224 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:45.975018 kubelet[3224]: I0912 17:39:45.974845 3224 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:45.975064 kubelet[3224]: I0912 17:39:45.975027 3224 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:39:45.975064 kubelet[3224]: I0912 17:39:45.975041 3224 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:39:45.975064 kubelet[3224]: I0912 17:39:45.975063 3224 policy_none.go:49] "None policy: Start" Sep 12 17:39:45.975188 kubelet[3224]: I0912 17:39:45.975075 3224 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:39:45.975188 kubelet[3224]: I0912 17:39:45.975088 3224 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:45.975268 kubelet[3224]: I0912 17:39:45.975210 3224 state_mem.go:75] "Updated machine memory state" Sep 12 17:39:45.979325 kubelet[3224]: I0912 17:39:45.979028 3224 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:45.979577 kubelet[3224]: I0912 17:39:45.979526 3224 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:45.979577 kubelet[3224]: I0912 17:39:45.979543 3224 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:45.980197 kubelet[3224]: I0912 17:39:45.980178 3224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:45.981385 kubelet[3224]: E0912 17:39:45.981362 3224 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:39:46.032545 kubelet[3224]: I0912 17:39:46.032003 3224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.032545 kubelet[3224]: I0912 17:39:46.032111 3224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.032545 kubelet[3224]: I0912 17:39:46.032372 3224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.046632 kubelet[3224]: W0912 17:39:46.046486 3224 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:46.046632 kubelet[3224]: E0912 17:39:46.046556 3224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.047041 kubelet[3224]: W0912 17:39:46.047006 3224 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:46.047140 kubelet[3224]: E0912 17:39:46.047063 3224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.047502 kubelet[3224]: W0912 17:39:46.047481 3224 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:46.047601 kubelet[3224]: E0912 17:39:46.047528 3224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.083944 kubelet[3224]: I0912 17:39:46.083700 3224 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.093385 kubelet[3224]: I0912 17:39:46.093346 3224 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.093527 kubelet[3224]: I0912 17:39:46.093431 3224 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129099 kubelet[3224]: I0912 17:39:46.129056 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9cdc148897f0bf694b5e9ca30c0e472-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-da806c5a3d\" (UID: \"a9cdc148897f0bf694b5e9ca30c0e472\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129099 kubelet[3224]: I0912 17:39:46.129100 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129099 kubelet[3224]: I0912 17:39:46.129127 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129099 kubelet[3224]: I0912 17:39:46.129152 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129500 kubelet[3224]: I0912 17:39:46.129174 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129500 kubelet[3224]: I0912 17:39:46.129194 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129500 kubelet[3224]: I0912 17:39:46.129213 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129500 kubelet[3224]: I0912 17:39:46.129244 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04f59ac08c0b5640cecb83b3068dd475-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" (UID: \"04f59ac08c0b5640cecb83b3068dd475\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.129500 kubelet[3224]: I0912 17:39:46.129263 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/875c2855f42781a2ce5ea8abdb83db63-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-da806c5a3d\" (UID: \"875c2855f42781a2ce5ea8abdb83db63\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.892604 kubelet[3224]: I0912 17:39:46.892503 3224 apiserver.go:52] "Watching apiserver" Sep 12 17:39:46.928436 kubelet[3224]: I0912 17:39:46.928392 3224 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:39:46.959815 kubelet[3224]: I0912 17:39:46.959780 3224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.960562 kubelet[3224]: I0912 17:39:46.960522 3224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.969947 kubelet[3224]: W0912 17:39:46.969922 3224 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:46.970035 kubelet[3224]: W0912 17:39:46.969961 3224 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:46.970035 kubelet[3224]: E0912 17:39:46.969997 3224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.970788 kubelet[3224]: E0912 17:39:46.970190 3224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-da806c5a3d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" Sep 12 17:39:46.990602 kubelet[3224]: I0912 17:39:46.990522 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-da806c5a3d" podStartSLOduration=4.990495105 podStartE2EDuration="4.990495105s" podCreationTimestamp="2025-09-12 17:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:46.981804734 +0000 UTC m=+1.163931589" watchObservedRunningTime="2025-09-12 17:39:46.990495105 +0000 UTC m=+1.172621860" Sep 12 17:39:46.999625 kubelet[3224]: I0912 17:39:46.999579 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-da806c5a3d" podStartSLOduration=4.9995648710000005 podStartE2EDuration="4.999564871s" podCreationTimestamp="2025-09-12 17:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:46.999339215 +0000 UTC m=+1.181466070" watchObservedRunningTime="2025-09-12 17:39:46.999564871 +0000 UTC m=+1.181691626" Sep 12 17:39:46.999834 kubelet[3224]: I0912 17:39:46.999652 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-da806c5a3d" podStartSLOduration=4.999645391 podStartE2EDuration="4.999645391s" podCreationTimestamp="2025-09-12 17:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:46.99111666 +0000 UTC m=+1.173243516" watchObservedRunningTime="2025-09-12 17:39:46.999645391 +0000 UTC m=+1.181772146" Sep 12 17:39:49.519465 kubelet[3224]: I0912 17:39:49.519423 3224 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:39:49.519964 containerd[1703]: time="2025-09-12T17:39:49.519927198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:39:49.520285 kubelet[3224]: I0912 17:39:49.520135 3224 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:39:50.309469 systemd[1]: Created slice kubepods-besteffort-pod51dd8025_2fac_4e9c_afd8_303555ac5609.slice - libcontainer container kubepods-besteffort-pod51dd8025_2fac_4e9c_afd8_303555ac5609.slice. 
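The pod_startup_latency_tracker entries above can be checked by hand: podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling), so the two are equal here because the pull timestamps are the zero value. A worked check against the kube-apiserver figures above; the accounting rule is inferred from these numbers rather than quoted from kubelet source.

from datetime import datetime, timezone

created = datetime(2025, 9, 12, 17, 39, 42, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2025, 9, 12, 17, 39, 46, 990495, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to microseconds

e2e = (observed - created).total_seconds()
print(e2e)  # 4.990495, matching podStartE2EDuration="4.990495105s"

# With image pulls, the pull window would be subtracted from this figure to get
# podStartSLOduration (see the tigera-operator entry later in this log, where
# 5.990797256s - 3.379574503s = 2.611222753s).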
Sep 12 17:39:50.358278 kubelet[3224]: I0912 17:39:50.358243 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51dd8025-2fac-4e9c-afd8-303555ac5609-lib-modules\") pod \"kube-proxy-m64kc\" (UID: \"51dd8025-2fac-4e9c-afd8-303555ac5609\") " pod="kube-system/kube-proxy-m64kc" Sep 12 17:39:50.358278 kubelet[3224]: I0912 17:39:50.358279 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51dd8025-2fac-4e9c-afd8-303555ac5609-xtables-lock\") pod \"kube-proxy-m64kc\" (UID: \"51dd8025-2fac-4e9c-afd8-303555ac5609\") " pod="kube-system/kube-proxy-m64kc" Sep 12 17:39:50.358567 kubelet[3224]: I0912 17:39:50.358306 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51dd8025-2fac-4e9c-afd8-303555ac5609-kube-proxy\") pod \"kube-proxy-m64kc\" (UID: \"51dd8025-2fac-4e9c-afd8-303555ac5609\") " pod="kube-system/kube-proxy-m64kc" Sep 12 17:39:50.358567 kubelet[3224]: I0912 17:39:50.358329 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb7w\" (UniqueName: \"kubernetes.io/projected/51dd8025-2fac-4e9c-afd8-303555ac5609-kube-api-access-5kb7w\") pod \"kube-proxy-m64kc\" (UID: \"51dd8025-2fac-4e9c-afd8-303555ac5609\") " pod="kube-system/kube-proxy-m64kc" Sep 12 17:39:50.622920 containerd[1703]: time="2025-09-12T17:39:50.622666019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m64kc,Uid:51dd8025-2fac-4e9c-afd8-303555ac5609,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:50.659556 kubelet[3224]: I0912 17:39:50.659519 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b9db419a-6f78-4d57-9076-e5cf36b868ec-var-lib-calico\") pod \"tigera-operator-755d956888-6d2zn\" (UID: \"b9db419a-6f78-4d57-9076-e5cf36b868ec\") " pod="tigera-operator/tigera-operator-755d956888-6d2zn" Sep 12 17:39:50.659556 kubelet[3224]: I0912 17:39:50.659561 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ksz\" (UniqueName: \"kubernetes.io/projected/b9db419a-6f78-4d57-9076-e5cf36b868ec-kube-api-access-67ksz\") pod \"tigera-operator-755d956888-6d2zn\" (UID: \"b9db419a-6f78-4d57-9076-e5cf36b868ec\") " pod="tigera-operator/tigera-operator-755d956888-6d2zn" Sep 12 17:39:50.666038 systemd[1]: Created slice kubepods-besteffort-podb9db419a_6f78_4d57_9076_e5cf36b868ec.slice - libcontainer container kubepods-besteffort-podb9db419a_6f78_4d57_9076_e5cf36b868ec.slice. Sep 12 17:39:50.678299 containerd[1703]: time="2025-09-12T17:39:50.677712986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:50.678299 containerd[1703]: time="2025-09-12T17:39:50.677786789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:50.678299 containerd[1703]: time="2025-09-12T17:39:50.677801290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:50.678299 containerd[1703]: time="2025-09-12T17:39:50.677898194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:50.699321 systemd[1]: run-containerd-runc-k8s.io-2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765-runc.pKFxGo.mount: Deactivated successfully. Sep 12 17:39:50.709914 systemd[1]: Started cri-containerd-2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765.scope - libcontainer container 2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765. Sep 12 17:39:50.732637 containerd[1703]: time="2025-09-12T17:39:50.732583047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m64kc,Uid:51dd8025-2fac-4e9c-afd8-303555ac5609,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765\"" Sep 12 17:39:50.736418 containerd[1703]: time="2025-09-12T17:39:50.736377203Z" level=info msg="CreateContainer within sandbox \"2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:39:50.769168 containerd[1703]: time="2025-09-12T17:39:50.768946844Z" level=info msg="CreateContainer within sandbox \"2dac9959b2b28f57e9f3fe30c0a832fdb8edd950b07bef72ceaba7cdc4966765\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b44f21b2a7c914f08ed476e4606ded5b57a894393e16a4ef1324c6f668beee6\"" Sep 12 17:39:50.770316 containerd[1703]: time="2025-09-12T17:39:50.770031689Z" level=info msg="StartContainer for \"8b44f21b2a7c914f08ed476e4606ded5b57a894393e16a4ef1324c6f668beee6\"" Sep 12 17:39:50.801917 systemd[1]: Started cri-containerd-8b44f21b2a7c914f08ed476e4606ded5b57a894393e16a4ef1324c6f668beee6.scope - libcontainer container 8b44f21b2a7c914f08ed476e4606ded5b57a894393e16a4ef1324c6f668beee6. Sep 12 17:39:50.835756 containerd[1703]: time="2025-09-12T17:39:50.835653292Z" level=info msg="StartContainer for \"8b44f21b2a7c914f08ed476e4606ded5b57a894393e16a4ef1324c6f668beee6\" returns successfully" Sep 12 17:39:50.972224 containerd[1703]: time="2025-09-12T17:39:50.972171215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6d2zn,Uid:b9db419a-6f78-4d57-9076-e5cf36b868ec,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:39:51.020582 containerd[1703]: time="2025-09-12T17:39:51.020456204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:51.020582 containerd[1703]: time="2025-09-12T17:39:51.020504806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:51.020582 containerd[1703]: time="2025-09-12T17:39:51.020523807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:51.020976 containerd[1703]: time="2025-09-12T17:39:51.020626711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:51.042947 systemd[1]: Started cri-containerd-113a2c4f74ba66f3f6fba04281652d2e15a1a6711f4cdd60514b8bacb43bf198.scope - libcontainer container 113a2c4f74ba66f3f6fba04281652d2e15a1a6711f4cdd60514b8bacb43bf198. 
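The kubelet entries throughout this log use the klog header format: a severity letter (I/W/E/F), the date as MMDD, a microsecond timestamp, the process id, and source file:line, followed by the structured message. A short sketch for pulling out just the warning and error entries from a saved copy of this journal; the errors helper and the node.log filename are assumptions for illustration.

import re

# Matches headers like: E0912 17:39:45.883840 3224 log.go:32] "RuntimeConfig from runtime service failed" ...
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<date>\d{4}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)'
)

def errors(lines):
    """Yield (time, source, message) for warning, error, and fatal klog entries."""
    for line in lines:
        m = KLOG.search(line)
        if m and m.group('sev') in ('W', 'E', 'F'):
            yield m.group('time'), m.group('src'), m.group('msg')

# Usage sketch:
# with open('node.log') as f:
#     for when, src, msg in errors(f):
#         print(when, src, msg)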
Sep 12 17:39:51.084976 containerd[1703]: time="2025-09-12T17:39:51.084411538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6d2zn,Uid:b9db419a-6f78-4d57-9076-e5cf36b868ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"113a2c4f74ba66f3f6fba04281652d2e15a1a6711f4cdd60514b8bacb43bf198\"" Sep 12 17:39:51.086300 containerd[1703]: time="2025-09-12T17:39:51.086266715Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:39:52.790085 kubelet[3224]: I0912 17:39:52.789765 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m64kc" podStartSLOduration=2.789724879 podStartE2EDuration="2.789724879s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:50.990589774 +0000 UTC m=+5.172716629" watchObservedRunningTime="2025-09-12 17:39:52.789724879 +0000 UTC m=+6.971851734" Sep 12 17:39:52.804573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991150996.mount: Deactivated successfully. Sep 12 17:39:54.453102 containerd[1703]: time="2025-09-12T17:39:54.453053291Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:54.456015 containerd[1703]: time="2025-09-12T17:39:54.455846106Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:39:54.460120 containerd[1703]: time="2025-09-12T17:39:54.458981135Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:54.464290 containerd[1703]: time="2025-09-12T17:39:54.463454719Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:54.464290 containerd[1703]: time="2025-09-12T17:39:54.464149648Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.377841431s" Sep 12 17:39:54.464290 containerd[1703]: time="2025-09-12T17:39:54.464184749Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:39:54.466486 containerd[1703]: time="2025-09-12T17:39:54.466457343Z" level=info msg="CreateContainer within sandbox \"113a2c4f74ba66f3f6fba04281652d2e15a1a6711f4cdd60514b8bacb43bf198\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:39:54.503601 containerd[1703]: time="2025-09-12T17:39:54.503556571Z" level=info msg="CreateContainer within sandbox \"113a2c4f74ba66f3f6fba04281652d2e15a1a6711f4cdd60514b8bacb43bf198\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da64e5e3b95dd760d27d3f097102eb340c00cb1026855ecc5a9334b9645f2076\"" Sep 12 17:39:54.504969 containerd[1703]: time="2025-09-12T17:39:54.504111794Z" level=info msg="StartContainer for \"da64e5e3b95dd760d27d3f097102eb340c00cb1026855ecc5a9334b9645f2076\"" Sep 
12 17:39:54.540945 systemd[1]: Started cri-containerd-da64e5e3b95dd760d27d3f097102eb340c00cb1026855ecc5a9334b9645f2076.scope - libcontainer container da64e5e3b95dd760d27d3f097102eb340c00cb1026855ecc5a9334b9645f2076. Sep 12 17:39:54.568437 containerd[1703]: time="2025-09-12T17:39:54.568392608Z" level=info msg="StartContainer for \"da64e5e3b95dd760d27d3f097102eb340c00cb1026855ecc5a9334b9645f2076\" returns successfully" Sep 12 17:39:55.990878 kubelet[3224]: I0912 17:39:55.990818 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6d2zn" podStartSLOduration=2.611222753 podStartE2EDuration="5.990797256s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="2025-09-12 17:39:51.085631088 +0000 UTC m=+5.267757943" lastFinishedPulling="2025-09-12 17:39:54.465205591 +0000 UTC m=+8.647332446" observedRunningTime="2025-09-12 17:39:54.990903753 +0000 UTC m=+9.173030608" watchObservedRunningTime="2025-09-12 17:39:55.990797256 +0000 UTC m=+10.172924011" Sep 12 17:40:00.953957 sudo[2342]: pam_unix(sudo:session): session closed for user root Sep 12 17:40:01.057006 sshd[2313]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:01.062929 systemd[1]: sshd@6-10.200.4.37:22-10.200.16.10:36728.service: Deactivated successfully. Sep 12 17:40:01.067770 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:40:01.068810 systemd[1]: session-9.scope: Consumed 4.523s CPU time, 157.5M memory peak, 0B memory swap peak. Sep 12 17:40:01.071282 systemd-logind[1680]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:40:01.073284 systemd-logind[1680]: Removed session 9. Sep 12 17:40:05.741001 systemd[1]: Created slice kubepods-besteffort-pod1f9a6bb1_c310_42c0_9605_28807f93d66f.slice - libcontainer container kubepods-besteffort-pod1f9a6bb1_c310_42c0_9605_28807f93d66f.slice. Sep 12 17:40:05.757964 kubelet[3224]: I0912 17:40:05.757628 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f9a6bb1-c310-42c0-9605-28807f93d66f-tigera-ca-bundle\") pod \"calico-typha-7fc767b4f9-7d9ks\" (UID: \"1f9a6bb1-c310-42c0-9605-28807f93d66f\") " pod="calico-system/calico-typha-7fc767b4f9-7d9ks" Sep 12 17:40:05.757964 kubelet[3224]: I0912 17:40:05.757766 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b48z6\" (UniqueName: \"kubernetes.io/projected/1f9a6bb1-c310-42c0-9605-28807f93d66f-kube-api-access-b48z6\") pod \"calico-typha-7fc767b4f9-7d9ks\" (UID: \"1f9a6bb1-c310-42c0-9605-28807f93d66f\") " pod="calico-system/calico-typha-7fc767b4f9-7d9ks" Sep 12 17:40:05.757964 kubelet[3224]: I0912 17:40:05.757798 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f9a6bb1-c310-42c0-9605-28807f93d66f-typha-certs\") pod \"calico-typha-7fc767b4f9-7d9ks\" (UID: \"1f9a6bb1-c310-42c0-9605-28807f93d66f\") " pod="calico-system/calico-typha-7fc767b4f9-7d9ks" Sep 12 17:40:06.048401 containerd[1703]: time="2025-09-12T17:40:06.048271245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fc767b4f9-7d9ks,Uid:1f9a6bb1-c310-42c0-9605-28807f93d66f,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:06.095900 containerd[1703]: time="2025-09-12T17:40:06.095236672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:06.096323 containerd[1703]: time="2025-09-12T17:40:06.095936014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:06.096323 containerd[1703]: time="2025-09-12T17:40:06.096014718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.096323 containerd[1703]: time="2025-09-12T17:40:06.096209930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.129099 systemd[1]: Started cri-containerd-082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd.scope - libcontainer container 082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd. Sep 12 17:40:06.182297 systemd[1]: Created slice kubepods-besteffort-pod44885330_3ea1_4f53_99fd_419bb3e8f617.slice - libcontainer container kubepods-besteffort-pod44885330_3ea1_4f53_99fd_419bb3e8f617.slice. Sep 12 17:40:06.217005 containerd[1703]: time="2025-09-12T17:40:06.216910293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fc767b4f9-7d9ks,Uid:1f9a6bb1-c310-42c0-9605-28807f93d66f,Namespace:calico-system,Attempt:0,} returns sandbox id \"082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd\"" Sep 12 17:40:06.219712 containerd[1703]: time="2025-09-12T17:40:06.219682860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:40:06.260698 kubelet[3224]: I0912 17:40:06.260435 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jx8q\" (UniqueName: \"kubernetes.io/projected/44885330-3ea1-4f53-99fd-419bb3e8f617-kube-api-access-5jx8q\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.260698 kubelet[3224]: I0912 17:40:06.260469 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-cni-log-dir\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.260698 kubelet[3224]: I0912 17:40:06.260485 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44885330-3ea1-4f53-99fd-419bb3e8f617-tigera-ca-bundle\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.260698 kubelet[3224]: I0912 17:40:06.260507 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-lib-modules\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.260698 kubelet[3224]: I0912 17:40:06.260521 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-xtables-lock\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261000 
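The pod_startup_latency_tracker entries above (kube-proxy-m64kc and tigera-operator-755d956888-6d2zn) satisfy a simple relationship between the logged timestamps: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), consistent with the startup SLI excluding image-pull time; for kube-proxy the pull timestamps are zero, so the two durations coincide. Below is a small Python sketch that reproduces the tigera-operator numbers from the quoted fields only (nanosecond fractions are truncated to microseconds, so the results agree to within a microsecond):

from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Parse "YYYY-MM-DD HH:MM:SS[.frac] +0000 UTC"; datetime carries microseconds,
    # so the nanosecond fractions in the log are truncated to six digits.
    s = s.replace(" +0000 UTC", "")
    if "." in s:
        base, frac = s.split(".")
        return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

creation   = ts("2025-09-12 17:39:50 +0000 UTC")            # podCreationTimestamp
first_pull = ts("2025-09-12 17:39:51.085631088 +0000 UTC")  # firstStartedPulling
last_pull  = ts("2025-09-12 17:39:54.465205591 +0000 UTC")  # lastFinishedPulling
running    = ts("2025-09-12 17:39:55.990797256 +0000 UTC")  # watchObservedRunningTime

e2e  = (running - creation).total_seconds()     # ~5.990797s -> podStartE2EDuration
pull = (last_pull - first_pull).total_seconds() # ~3.379574s -> image-pull window
slo  = e2e - pull                               # ~2.611223s -> podStartSLOduration
print(f"E2E={e2e:.6f}s  pull={pull:.6f}s  SLO={slo:.6f}s")

The ~3.38 s pull window also lines up with containerd's own "Pulled image ... in 3.377841431s" message for the quay.io/tigera/operator image earlier in the log.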
kubelet[3224]: I0912 17:40:06.260535 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-flexvol-driver-host\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261000 kubelet[3224]: I0912 17:40:06.260547 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-var-lib-calico\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261000 kubelet[3224]: I0912 17:40:06.260560 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-var-run-calico\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261000 kubelet[3224]: I0912 17:40:06.260575 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-cni-bin-dir\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261000 kubelet[3224]: I0912 17:40:06.260590 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/44885330-3ea1-4f53-99fd-419bb3e8f617-node-certs\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261144 kubelet[3224]: I0912 17:40:06.260604 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-policysync\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.261144 kubelet[3224]: I0912 17:40:06.260620 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/44885330-3ea1-4f53-99fd-419bb3e8f617-cni-net-dir\") pod \"calico-node-f66xw\" (UID: \"44885330-3ea1-4f53-99fd-419bb3e8f617\") " pod="calico-system/calico-node-f66xw" Sep 12 17:40:06.364942 kubelet[3224]: E0912 17:40:06.364850 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.366321 kubelet[3224]: W0912 17:40:06.366028 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.366321 kubelet[3224]: E0912 17:40:06.366093 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.366644 kubelet[3224]: E0912 17:40:06.366526 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.366644 kubelet[3224]: W0912 17:40:06.366540 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.366644 kubelet[3224]: E0912 17:40:06.366559 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.366979 kubelet[3224]: E0912 17:40:06.366776 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.366979 kubelet[3224]: W0912 17:40:06.366788 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.366979 kubelet[3224]: E0912 17:40:06.366801 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.367323 kubelet[3224]: E0912 17:40:06.367229 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.367323 kubelet[3224]: W0912 17:40:06.367242 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.367557 kubelet[3224]: E0912 17:40:06.367431 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.368965 kubelet[3224]: E0912 17:40:06.368950 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.369192 kubelet[3224]: W0912 17:40:06.369052 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.369192 kubelet[3224]: E0912 17:40:06.369073 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.369549 kubelet[3224]: E0912 17:40:06.369454 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.369549 kubelet[3224]: W0912 17:40:06.369468 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.369549 kubelet[3224]: E0912 17:40:06.369495 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.370299 kubelet[3224]: E0912 17:40:06.370283 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.370430 kubelet[3224]: W0912 17:40:06.370361 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.370430 kubelet[3224]: E0912 17:40:06.370379 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.372719 kubelet[3224]: E0912 17:40:06.372699 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.372719 kubelet[3224]: W0912 17:40:06.372714 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.372848 kubelet[3224]: E0912 17:40:06.372729 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.432777 kubelet[3224]: E0912 17:40:06.432679 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:06.451556 kubelet[3224]: E0912 17:40:06.451346 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.451556 kubelet[3224]: W0912 17:40:06.451374 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.451556 kubelet[3224]: E0912 17:40:06.451397 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.451556 kubelet[3224]: E0912 17:40:06.451621 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.451556 kubelet[3224]: W0912 17:40:06.451631 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.451556 kubelet[3224]: E0912 17:40:06.451654 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.452189 kubelet[3224]: E0912 17:40:06.452173 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.452771 kubelet[3224]: W0912 17:40:06.452312 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.452771 kubelet[3224]: E0912 17:40:06.452338 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.453321 kubelet[3224]: E0912 17:40:06.453304 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.453509 kubelet[3224]: W0912 17:40:06.453459 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.454353 kubelet[3224]: E0912 17:40:06.453614 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.455158 kubelet[3224]: E0912 17:40:06.455038 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.455158 kubelet[3224]: W0912 17:40:06.455054 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.455158 kubelet[3224]: E0912 17:40:06.455072 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.457053 kubelet[3224]: E0912 17:40:06.456646 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.457053 kubelet[3224]: W0912 17:40:06.456664 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.457053 kubelet[3224]: E0912 17:40:06.456682 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.457560 kubelet[3224]: E0912 17:40:06.457386 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.457560 kubelet[3224]: W0912 17:40:06.457400 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.457560 kubelet[3224]: E0912 17:40:06.457414 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.457892 kubelet[3224]: E0912 17:40:06.457755 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.457892 kubelet[3224]: W0912 17:40:06.457767 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.458338 kubelet[3224]: E0912 17:40:06.457780 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.459187 kubelet[3224]: E0912 17:40:06.459082 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.459287 kubelet[3224]: W0912 17:40:06.459274 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.459365 kubelet[3224]: E0912 17:40:06.459345 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.460211 kubelet[3224]: E0912 17:40:06.460086 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.460211 kubelet[3224]: W0912 17:40:06.460099 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.460211 kubelet[3224]: E0912 17:40:06.460113 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.463036 kubelet[3224]: E0912 17:40:06.462704 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.463036 kubelet[3224]: W0912 17:40:06.462727 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.463036 kubelet[3224]: E0912 17:40:06.462760 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.463036 kubelet[3224]: E0912 17:40:06.462980 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.463036 kubelet[3224]: W0912 17:40:06.462992 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.463036 kubelet[3224]: E0912 17:40:06.463005 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.463946 kubelet[3224]: E0912 17:40:06.463637 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.463946 kubelet[3224]: W0912 17:40:06.463651 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.463946 kubelet[3224]: E0912 17:40:06.463664 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.463946 kubelet[3224]: E0912 17:40:06.463913 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.463946 kubelet[3224]: W0912 17:40:06.463924 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.463946 kubelet[3224]: E0912 17:40:06.463937 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.464710 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.465723 kubelet[3224]: W0912 17:40:06.464725 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.464763 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.464966 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.465723 kubelet[3224]: W0912 17:40:06.464976 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.464989 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.465405 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.465723 kubelet[3224]: W0912 17:40:06.465417 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.465723 kubelet[3224]: E0912 17:40:06.465430 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.466227 kubelet[3224]: E0912 17:40:06.465906 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.466227 kubelet[3224]: W0912 17:40:06.465918 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.466227 kubelet[3224]: E0912 17:40:06.465931 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.468252 kubelet[3224]: E0912 17:40:06.467890 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.468252 kubelet[3224]: W0912 17:40:06.467913 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.468252 kubelet[3224]: E0912 17:40:06.467928 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.468407 kubelet[3224]: E0912 17:40:06.468289 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.468407 kubelet[3224]: W0912 17:40:06.468300 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.468407 kubelet[3224]: E0912 17:40:06.468316 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.468795 kubelet[3224]: E0912 17:40:06.468777 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.468795 kubelet[3224]: W0912 17:40:06.468794 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.468918 kubelet[3224]: E0912 17:40:06.468808 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.468918 kubelet[3224]: I0912 17:40:06.468839 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/094c632a-fdc9-41e5-8058-f2de8effbf33-kubelet-dir\") pod \"csi-node-driver-k4nvw\" (UID: \"094c632a-fdc9-41e5-8058-f2de8effbf33\") " pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:06.469129 kubelet[3224]: E0912 17:40:06.469068 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.469129 kubelet[3224]: W0912 17:40:06.469081 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.469129 kubelet[3224]: E0912 17:40:06.469094 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.469129 kubelet[3224]: I0912 17:40:06.469118 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/094c632a-fdc9-41e5-8058-f2de8effbf33-registration-dir\") pod \"csi-node-driver-k4nvw\" (UID: \"094c632a-fdc9-41e5-8058-f2de8effbf33\") " pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:06.470025 kubelet[3224]: E0912 17:40:06.469309 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.470025 kubelet[3224]: W0912 17:40:06.469322 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.470025 kubelet[3224]: E0912 17:40:06.469335 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.470025 kubelet[3224]: I0912 17:40:06.469358 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zxn\" (UniqueName: \"kubernetes.io/projected/094c632a-fdc9-41e5-8058-f2de8effbf33-kube-api-access-t9zxn\") pod \"csi-node-driver-k4nvw\" (UID: \"094c632a-fdc9-41e5-8058-f2de8effbf33\") " pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:06.470025 kubelet[3224]: E0912 17:40:06.469901 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.470025 kubelet[3224]: W0912 17:40:06.469914 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.470025 kubelet[3224]: E0912 17:40:06.469935 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.471045 kubelet[3224]: E0912 17:40:06.470911 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.471045 kubelet[3224]: W0912 17:40:06.470927 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.471045 kubelet[3224]: E0912 17:40:06.470947 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.471281 kubelet[3224]: E0912 17:40:06.471269 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.472958 kubelet[3224]: W0912 17:40:06.472772 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.472958 kubelet[3224]: E0912 17:40:06.472807 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.472958 kubelet[3224]: I0912 17:40:06.472832 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/094c632a-fdc9-41e5-8058-f2de8effbf33-varrun\") pod \"csi-node-driver-k4nvw\" (UID: \"094c632a-fdc9-41e5-8058-f2de8effbf33\") " pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:06.473281 kubelet[3224]: E0912 17:40:06.473204 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.473281 kubelet[3224]: W0912 17:40:06.473220 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.473281 kubelet[3224]: E0912 17:40:06.473250 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.473424 kubelet[3224]: I0912 17:40:06.473283 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/094c632a-fdc9-41e5-8058-f2de8effbf33-socket-dir\") pod \"csi-node-driver-k4nvw\" (UID: \"094c632a-fdc9-41e5-8058-f2de8effbf33\") " pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:06.473817 kubelet[3224]: E0912 17:40:06.473669 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.473817 kubelet[3224]: W0912 17:40:06.473684 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.473817 kubelet[3224]: E0912 17:40:06.473713 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.474150 kubelet[3224]: E0912 17:40:06.473924 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.474150 kubelet[3224]: W0912 17:40:06.473935 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.474150 kubelet[3224]: E0912 17:40:06.473968 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.474593 kubelet[3224]: E0912 17:40:06.474446 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.474593 kubelet[3224]: W0912 17:40:06.474462 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.474593 kubelet[3224]: E0912 17:40:06.474483 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.475016 kubelet[3224]: E0912 17:40:06.475003 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.475161 kubelet[3224]: W0912 17:40:06.475091 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.475161 kubelet[3224]: E0912 17:40:06.475110 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.475540 kubelet[3224]: E0912 17:40:06.475440 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.475540 kubelet[3224]: W0912 17:40:06.475454 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.475540 kubelet[3224]: E0912 17:40:06.475468 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.476205 kubelet[3224]: E0912 17:40:06.476009 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.476205 kubelet[3224]: W0912 17:40:06.476023 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.476205 kubelet[3224]: E0912 17:40:06.476037 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.476582 kubelet[3224]: E0912 17:40:06.476463 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.476582 kubelet[3224]: W0912 17:40:06.476478 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.476582 kubelet[3224]: E0912 17:40:06.476491 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.476942 kubelet[3224]: E0912 17:40:06.476892 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.476942 kubelet[3224]: W0912 17:40:06.476907 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.476942 kubelet[3224]: E0912 17:40:06.476920 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.486419 containerd[1703]: time="2025-09-12T17:40:06.486384308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f66xw,Uid:44885330-3ea1-4f53-99fd-419bb3e8f617,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:06.530021 containerd[1703]: time="2025-09-12T17:40:06.529815322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:06.530021 containerd[1703]: time="2025-09-12T17:40:06.529875525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:06.530021 containerd[1703]: time="2025-09-12T17:40:06.529892026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.530021 containerd[1703]: time="2025-09-12T17:40:06.529973731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:06.549923 systemd[1]: Started cri-containerd-b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863.scope - libcontainer container b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863. Sep 12 17:40:06.574618 kubelet[3224]: E0912 17:40:06.574409 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.574618 kubelet[3224]: W0912 17:40:06.574430 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.574618 kubelet[3224]: E0912 17:40:06.574457 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.578082 kubelet[3224]: E0912 17:40:06.577975 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.578082 kubelet[3224]: W0912 17:40:06.577994 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.578082 kubelet[3224]: E0912 17:40:06.578015 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.579107 kubelet[3224]: E0912 17:40:06.578817 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.579107 kubelet[3224]: W0912 17:40:06.578834 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.579107 kubelet[3224]: E0912 17:40:06.578879 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.579521 kubelet[3224]: E0912 17:40:06.579323 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.579758 kubelet[3224]: W0912 17:40:06.579684 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.579758 kubelet[3224]: E0912 17:40:06.579725 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.580526 kubelet[3224]: E0912 17:40:06.580289 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.580526 kubelet[3224]: W0912 17:40:06.580311 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.580526 kubelet[3224]: E0912 17:40:06.580340 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.581138 kubelet[3224]: E0912 17:40:06.581045 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.581138 kubelet[3224]: W0912 17:40:06.581060 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.581138 kubelet[3224]: E0912 17:40:06.581104 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.581670 kubelet[3224]: E0912 17:40:06.581575 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.581670 kubelet[3224]: W0912 17:40:06.581607 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.581670 kubelet[3224]: E0912 17:40:06.581633 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.582818 kubelet[3224]: E0912 17:40:06.582600 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.582818 kubelet[3224]: W0912 17:40:06.582617 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.582818 kubelet[3224]: E0912 17:40:06.582648 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.583423 kubelet[3224]: E0912 17:40:06.583199 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.583423 kubelet[3224]: W0912 17:40:06.583213 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.583423 kubelet[3224]: E0912 17:40:06.583242 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.584337 kubelet[3224]: E0912 17:40:06.584059 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.584337 kubelet[3224]: W0912 17:40:06.584074 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.584337 kubelet[3224]: E0912 17:40:06.584105 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.585149 kubelet[3224]: E0912 17:40:06.584924 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.585149 kubelet[3224]: W0912 17:40:06.584939 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.585149 kubelet[3224]: E0912 17:40:06.584984 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.586069 kubelet[3224]: E0912 17:40:06.585921 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.586069 kubelet[3224]: W0912 17:40:06.585936 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.586465 kubelet[3224]: E0912 17:40:06.586342 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.588329 kubelet[3224]: E0912 17:40:06.588156 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.588329 kubelet[3224]: W0912 17:40:06.588171 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.589627 kubelet[3224]: E0912 17:40:06.588700 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.589627 kubelet[3224]: W0912 17:40:06.588717 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.590233 kubelet[3224]: E0912 17:40:06.590005 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.590233 kubelet[3224]: E0912 17:40:06.590028 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.590233 kubelet[3224]: E0912 17:40:06.590105 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.590233 kubelet[3224]: W0912 17:40:06.590114 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.590233 kubelet[3224]: E0912 17:40:06.590146 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.591440 kubelet[3224]: E0912 17:40:06.590937 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.591440 kubelet[3224]: W0912 17:40:06.590952 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.591440 kubelet[3224]: E0912 17:40:06.591016 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.592037 kubelet[3224]: E0912 17:40:06.591928 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.592037 kubelet[3224]: W0912 17:40:06.591945 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.592868 kubelet[3224]: E0912 17:40:06.592329 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.593169 kubelet[3224]: E0912 17:40:06.592993 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.593169 kubelet[3224]: W0912 17:40:06.593007 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.593169 kubelet[3224]: E0912 17:40:06.593043 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.593885 kubelet[3224]: E0912 17:40:06.593813 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.593885 kubelet[3224]: W0912 17:40:06.593829 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.594282 kubelet[3224]: E0912 17:40:06.594194 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.594661 kubelet[3224]: E0912 17:40:06.594619 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.594661 kubelet[3224]: W0912 17:40:06.594632 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.595966 kubelet[3224]: E0912 17:40:06.595891 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.596127 kubelet[3224]: E0912 17:40:06.596111 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.596300 kubelet[3224]: W0912 17:40:06.596205 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.596300 kubelet[3224]: E0912 17:40:06.596242 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.596682 kubelet[3224]: E0912 17:40:06.596599 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.596682 kubelet[3224]: W0912 17:40:06.596614 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.596825 kubelet[3224]: E0912 17:40:06.596792 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.597763 kubelet[3224]: E0912 17:40:06.597575 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.597763 kubelet[3224]: W0912 17:40:06.597590 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.597763 kubelet[3224]: E0912 17:40:06.597615 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.598684 kubelet[3224]: E0912 17:40:06.598585 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.598684 kubelet[3224]: W0912 17:40:06.598602 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.598684 kubelet[3224]: E0912 17:40:06.598629 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.599299 kubelet[3224]: E0912 17:40:06.599089 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.599299 kubelet[3224]: W0912 17:40:06.599103 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.599299 kubelet[3224]: E0912 17:40:06.599117 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:06.617942 containerd[1703]: time="2025-09-12T17:40:06.617512299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f66xw,Uid:44885330-3ea1-4f53-99fd-419bb3e8f617,Namespace:calico-system,Attempt:0,} returns sandbox id \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\"" Sep 12 17:40:06.633875 kubelet[3224]: E0912 17:40:06.633788 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:06.633875 kubelet[3224]: W0912 17:40:06.633811 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:06.633875 kubelet[3224]: E0912 17:40:06.633834 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:06.875921 systemd[1]: run-containerd-runc-k8s.io-082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd-runc.5fvSq6.mount: Deactivated successfully. Sep 12 17:40:07.936771 kubelet[3224]: E0912 17:40:07.935662 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:08.406664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086374921.mount: Deactivated successfully. Sep 12 17:40:09.248778 containerd[1703]: time="2025-09-12T17:40:09.248687727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.252905 containerd[1703]: time="2025-09-12T17:40:09.252849777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:40:09.255833 containerd[1703]: time="2025-09-12T17:40:09.255800455Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.260012 containerd[1703]: time="2025-09-12T17:40:09.259982306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:09.260564 containerd[1703]: time="2025-09-12T17:40:09.260529639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.040813077s" Sep 12 17:40:09.260642 containerd[1703]: time="2025-09-12T17:40:09.260570342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:40:09.262331 containerd[1703]: time="2025-09-12T17:40:09.262306146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:40:09.280089 
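The repeated driver-call.go:262 / driver-call.go:149 / plugins.go:695 messages above come from the kubelet's FlexVolume prober: on each scan of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it runs the nodeagent~uds/uds driver with the argument init; the uds binary is missing ("executable file not found in $PATH"), so the call produces no output, and decoding that empty output as JSON fails with "unexpected end of JSON input". The noise presumably stops once calico-node's flexvol init container (the ghcr.io/flatcar/calico/pod2daemon-flexvol image whose pull starts just above) installs the driver into that directory. For orientation only, a minimal stand-in that satisfies the FlexVolume init handshake could look like the sketch below; it illustrates the call convention the prober expects and is not the driver Calico ships.

#!/usr/bin/env python3
# Illustrative FlexVolume driver stub. The kubelet runs the driver binary as
# "<driver> init" (and later attach/mount/...) and expects a JSON status object
# on stdout; an empty reply is exactly what yields "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and advertise that attach/detach is not implemented.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Every operation this stub does not handle is reported as "Not supported".
    print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())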
containerd[1703]: time="2025-09-12T17:40:09.279976509Z" level=info msg="CreateContainer within sandbox \"082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:40:09.310027 containerd[1703]: time="2025-09-12T17:40:09.309976515Z" level=info msg="CreateContainer within sandbox \"082cec99b3af3e9700b143cd7c4536ef873cf308214142416b43879f67f903cd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c691147374215161802b82bcbd5e1c2f6dcb1673116b91ee3c4f9a3d21da1c4\"" Sep 12 17:40:09.310651 containerd[1703]: time="2025-09-12T17:40:09.310512547Z" level=info msg="StartContainer for \"1c691147374215161802b82bcbd5e1c2f6dcb1673116b91ee3c4f9a3d21da1c4\"" Sep 12 17:40:09.341931 systemd[1]: Started cri-containerd-1c691147374215161802b82bcbd5e1c2f6dcb1673116b91ee3c4f9a3d21da1c4.scope - libcontainer container 1c691147374215161802b82bcbd5e1c2f6dcb1673116b91ee3c4f9a3d21da1c4. Sep 12 17:40:09.399075 containerd[1703]: time="2025-09-12T17:40:09.398973070Z" level=info msg="StartContainer for \"1c691147374215161802b82bcbd5e1c2f6dcb1673116b91ee3c4f9a3d21da1c4\" returns successfully" Sep 12 17:40:09.936160 kubelet[3224]: E0912 17:40:09.935389 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:10.093973 kubelet[3224]: E0912 17:40:10.093934 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.094316 kubelet[3224]: W0912 17:40:10.094270 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.094428 kubelet[3224]: E0912 17:40:10.094313 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.094631 kubelet[3224]: E0912 17:40:10.094610 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.094631 kubelet[3224]: W0912 17:40:10.094626 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.094929 kubelet[3224]: E0912 17:40:10.094643 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.095215 kubelet[3224]: E0912 17:40:10.095195 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.095215 kubelet[3224]: W0912 17:40:10.095210 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.095373 kubelet[3224]: E0912 17:40:10.095225 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.095930 kubelet[3224]: E0912 17:40:10.095906 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.095930 kubelet[3224]: W0912 17:40:10.095920 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.096135 kubelet[3224]: E0912 17:40:10.095935 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.096724 kubelet[3224]: E0912 17:40:10.096706 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.096724 kubelet[3224]: W0912 17:40:10.096723 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.097284 kubelet[3224]: E0912 17:40:10.096752 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.097284 kubelet[3224]: E0912 17:40:10.096959 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.097284 kubelet[3224]: W0912 17:40:10.096973 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.097284 kubelet[3224]: E0912 17:40:10.096987 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.097284 kubelet[3224]: E0912 17:40:10.097253 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.097527 kubelet[3224]: W0912 17:40:10.097286 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.097527 kubelet[3224]: E0912 17:40:10.097302 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.097527 kubelet[3224]: E0912 17:40:10.097501 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.097527 kubelet[3224]: W0912 17:40:10.097512 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.097527 kubelet[3224]: E0912 17:40:10.097524 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.097866 kubelet[3224]: E0912 17:40:10.097808 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.097866 kubelet[3224]: W0912 17:40:10.097821 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.097866 kubelet[3224]: E0912 17:40:10.097835 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098049 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099333 kubelet[3224]: W0912 17:40:10.098061 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098074 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098287 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099333 kubelet[3224]: W0912 17:40:10.098301 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098314 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098502 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099333 kubelet[3224]: W0912 17:40:10.098513 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098525 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.099333 kubelet[3224]: E0912 17:40:10.098725 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099627 kubelet[3224]: W0912 17:40:10.098754 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099627 kubelet[3224]: E0912 17:40:10.098769 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.099627 kubelet[3224]: E0912 17:40:10.099012 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099627 kubelet[3224]: W0912 17:40:10.099026 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099627 kubelet[3224]: E0912 17:40:10.099039 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.099627 kubelet[3224]: E0912 17:40:10.099278 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.099627 kubelet[3224]: W0912 17:40:10.099291 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.099627 kubelet[3224]: E0912 17:40:10.099305 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.113177 kubelet[3224]: E0912 17:40:10.113156 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.113177 kubelet[3224]: W0912 17:40:10.113172 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.113344 kubelet[3224]: E0912 17:40:10.113189 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.113515 kubelet[3224]: E0912 17:40:10.113496 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.113515 kubelet[3224]: W0912 17:40:10.113510 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.113727 kubelet[3224]: E0912 17:40:10.113530 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.113811 kubelet[3224]: E0912 17:40:10.113766 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.113811 kubelet[3224]: W0912 17:40:10.113778 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.113811 kubelet[3224]: E0912 17:40:10.113798 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.114048 kubelet[3224]: E0912 17:40:10.114030 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.114048 kubelet[3224]: W0912 17:40:10.114044 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.114259 kubelet[3224]: E0912 17:40:10.114064 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.114320 kubelet[3224]: E0912 17:40:10.114272 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.114320 kubelet[3224]: W0912 17:40:10.114283 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.114320 kubelet[3224]: E0912 17:40:10.114301 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.114541 kubelet[3224]: E0912 17:40:10.114524 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.114541 kubelet[3224]: W0912 17:40:10.114539 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.114658 kubelet[3224]: E0912 17:40:10.114636 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.114820 kubelet[3224]: E0912 17:40:10.114803 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.114820 kubelet[3224]: W0912 17:40:10.114817 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.115079 kubelet[3224]: E0912 17:40:10.114877 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.115079 kubelet[3224]: E0912 17:40:10.115023 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.115079 kubelet[3224]: W0912 17:40:10.115034 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.115292 kubelet[3224]: E0912 17:40:10.115119 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.115292 kubelet[3224]: E0912 17:40:10.115276 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.115292 kubelet[3224]: W0912 17:40:10.115286 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.115538 kubelet[3224]: E0912 17:40:10.115305 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.115705 kubelet[3224]: E0912 17:40:10.115689 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.115705 kubelet[3224]: W0912 17:40:10.115702 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.115857 kubelet[3224]: E0912 17:40:10.115722 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.115961 kubelet[3224]: E0912 17:40:10.115947 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.115961 kubelet[3224]: W0912 17:40:10.115959 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.116059 kubelet[3224]: E0912 17:40:10.115987 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.116237 kubelet[3224]: E0912 17:40:10.116219 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.116237 kubelet[3224]: W0912 17:40:10.116233 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.116345 kubelet[3224]: E0912 17:40:10.116253 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.116802 kubelet[3224]: E0912 17:40:10.116781 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.116802 kubelet[3224]: W0912 17:40:10.116796 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.116944 kubelet[3224]: E0912 17:40:10.116855 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.117052 kubelet[3224]: E0912 17:40:10.117038 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.117052 kubelet[3224]: W0912 17:40:10.117050 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.117163 kubelet[3224]: E0912 17:40:10.117142 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.117306 kubelet[3224]: E0912 17:40:10.117289 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.117306 kubelet[3224]: W0912 17:40:10.117302 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.117447 kubelet[3224]: E0912 17:40:10.117323 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.117564 kubelet[3224]: E0912 17:40:10.117548 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.117564 kubelet[3224]: W0912 17:40:10.117561 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.117654 kubelet[3224]: E0912 17:40:10.117575 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:10.117878 kubelet[3224]: E0912 17:40:10.117859 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.117878 kubelet[3224]: W0912 17:40:10.117873 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.118004 kubelet[3224]: E0912 17:40:10.117891 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:10.118278 kubelet[3224]: E0912 17:40:10.118261 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:10.118278 kubelet[3224]: W0912 17:40:10.118274 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:10.118371 kubelet[3224]: E0912 17:40:10.118287 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.020961 kubelet[3224]: I0912 17:40:11.020924 3224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:11.106345 kubelet[3224]: E0912 17:40:11.106307 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.106345 kubelet[3224]: W0912 17:40:11.106335 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.106603 kubelet[3224]: E0912 17:40:11.106365 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.106654 kubelet[3224]: E0912 17:40:11.106619 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.106654 kubelet[3224]: W0912 17:40:11.106633 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.106654 kubelet[3224]: E0912 17:40:11.106650 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.106920 kubelet[3224]: E0912 17:40:11.106899 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.106920 kubelet[3224]: W0912 17:40:11.106916 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.107081 kubelet[3224]: E0912 17:40:11.106933 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.107176 kubelet[3224]: E0912 17:40:11.107158 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.107176 kubelet[3224]: W0912 17:40:11.107173 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.107284 kubelet[3224]: E0912 17:40:11.107187 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.107408 kubelet[3224]: E0912 17:40:11.107394 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.107408 kubelet[3224]: W0912 17:40:11.107406 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.107527 kubelet[3224]: E0912 17:40:11.107419 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.107622 kubelet[3224]: E0912 17:40:11.107606 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.107622 kubelet[3224]: W0912 17:40:11.107619 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.107767 kubelet[3224]: E0912 17:40:11.107631 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.107867 kubelet[3224]: E0912 17:40:11.107850 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.107867 kubelet[3224]: W0912 17:40:11.107865 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.107962 kubelet[3224]: E0912 17:40:11.107878 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.108090 kubelet[3224]: E0912 17:40:11.108075 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.108090 kubelet[3224]: W0912 17:40:11.108087 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.108217 kubelet[3224]: E0912 17:40:11.108100 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.108316 kubelet[3224]: E0912 17:40:11.108299 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.108316 kubelet[3224]: W0912 17:40:11.108312 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.108441 kubelet[3224]: E0912 17:40:11.108324 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.108529 kubelet[3224]: E0912 17:40:11.108512 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.108529 kubelet[3224]: W0912 17:40:11.108528 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.108631 kubelet[3224]: E0912 17:40:11.108541 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.108759 kubelet[3224]: E0912 17:40:11.108730 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.108832 kubelet[3224]: W0912 17:40:11.108761 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.108832 kubelet[3224]: E0912 17:40:11.108776 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.108987 kubelet[3224]: E0912 17:40:11.108971 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.108987 kubelet[3224]: W0912 17:40:11.108983 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.109094 kubelet[3224]: E0912 17:40:11.108996 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.109200 kubelet[3224]: E0912 17:40:11.109186 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.109200 kubelet[3224]: W0912 17:40:11.109198 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.109299 kubelet[3224]: E0912 17:40:11.109212 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.109425 kubelet[3224]: E0912 17:40:11.109410 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.109425 kubelet[3224]: W0912 17:40:11.109423 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.109542 kubelet[3224]: E0912 17:40:11.109436 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.109639 kubelet[3224]: E0912 17:40:11.109621 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.109639 kubelet[3224]: W0912 17:40:11.109635 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.109725 kubelet[3224]: E0912 17:40:11.109647 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.120026 kubelet[3224]: E0912 17:40:11.120005 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.120026 kubelet[3224]: W0912 17:40:11.120021 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.120189 kubelet[3224]: E0912 17:40:11.120037 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.120393 kubelet[3224]: E0912 17:40:11.120373 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.120393 kubelet[3224]: W0912 17:40:11.120389 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.120528 kubelet[3224]: E0912 17:40:11.120408 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.120761 kubelet[3224]: E0912 17:40:11.120727 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.120761 kubelet[3224]: W0912 17:40:11.120751 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.120874 kubelet[3224]: E0912 17:40:11.120778 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.121055 kubelet[3224]: E0912 17:40:11.121037 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.121055 kubelet[3224]: W0912 17:40:11.121051 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.121175 kubelet[3224]: E0912 17:40:11.121071 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.121321 kubelet[3224]: E0912 17:40:11.121304 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.121321 kubelet[3224]: W0912 17:40:11.121318 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.121486 kubelet[3224]: E0912 17:40:11.121337 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.121559 kubelet[3224]: E0912 17:40:11.121528 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.121559 kubelet[3224]: W0912 17:40:11.121540 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.121721 kubelet[3224]: E0912 17:40:11.121597 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.121721 kubelet[3224]: E0912 17:40:11.121719 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.121899 kubelet[3224]: W0912 17:40:11.121729 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.121899 kubelet[3224]: E0912 17:40:11.121786 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.122080 kubelet[3224]: E0912 17:40:11.121942 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.122080 kubelet[3224]: W0912 17:40:11.121953 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.122080 kubelet[3224]: E0912 17:40:11.121981 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.122228 kubelet[3224]: E0912 17:40:11.122177 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.122228 kubelet[3224]: W0912 17:40:11.122188 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.122228 kubelet[3224]: E0912 17:40:11.122205 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.122561 kubelet[3224]: E0912 17:40:11.122542 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.122561 kubelet[3224]: W0912 17:40:11.122557 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.122700 kubelet[3224]: E0912 17:40:11.122576 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.122840 kubelet[3224]: E0912 17:40:11.122825 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.122907 kubelet[3224]: W0912 17:40:11.122840 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.122907 kubelet[3224]: E0912 17:40:11.122867 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.123121 kubelet[3224]: E0912 17:40:11.123104 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.123121 kubelet[3224]: W0912 17:40:11.123117 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.123245 kubelet[3224]: E0912 17:40:11.123138 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.123934 kubelet[3224]: E0912 17:40:11.123663 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.123934 kubelet[3224]: W0912 17:40:11.123687 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.123934 kubelet[3224]: E0912 17:40:11.123708 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.124152 kubelet[3224]: E0912 17:40:11.124135 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.124152 kubelet[3224]: W0912 17:40:11.124149 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.124261 kubelet[3224]: E0912 17:40:11.124177 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.124435 kubelet[3224]: E0912 17:40:11.124417 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.124435 kubelet[3224]: W0912 17:40:11.124431 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.124536 kubelet[3224]: E0912 17:40:11.124459 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.124874 kubelet[3224]: E0912 17:40:11.124856 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.124874 kubelet[3224]: W0912 17:40:11.124870 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.125015 kubelet[3224]: E0912 17:40:11.124889 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.125119 kubelet[3224]: E0912 17:40:11.125104 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.125119 kubelet[3224]: W0912 17:40:11.125117 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.125217 kubelet[3224]: E0912 17:40:11.125143 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:11.125371 kubelet[3224]: E0912 17:40:11.125352 3224 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:11.125371 kubelet[3224]: W0912 17:40:11.125367 3224 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:11.125459 kubelet[3224]: E0912 17:40:11.125381 3224 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:11.436324 containerd[1703]: time="2025-09-12T17:40:11.436265287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.438681 containerd[1703]: time="2025-09-12T17:40:11.438571825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:40:11.442657 containerd[1703]: time="2025-09-12T17:40:11.441491299Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.445989 containerd[1703]: time="2025-09-12T17:40:11.445956066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.447842 containerd[1703]: time="2025-09-12T17:40:11.447769575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.185202413s" Sep 12 17:40:11.448186 containerd[1703]: time="2025-09-12T17:40:11.448093494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:40:11.453649 containerd[1703]: time="2025-09-12T17:40:11.453616124Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:40:11.486593 containerd[1703]: time="2025-09-12T17:40:11.486508090Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6\"" Sep 12 17:40:11.487479 containerd[1703]: time="2025-09-12T17:40:11.487441145Z" level=info msg="StartContainer for \"e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6\"" Sep 12 17:40:11.535950 systemd[1]: Started cri-containerd-e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6.scope - libcontainer container e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6. Sep 12 17:40:11.573403 containerd[1703]: time="2025-09-12T17:40:11.573337478Z" level=info msg="StartContainer for \"e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6\" returns successfully" Sep 12 17:40:11.585957 systemd[1]: cri-containerd-e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6.scope: Deactivated successfully. Sep 12 17:40:11.615394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6-rootfs.mount: Deactivated successfully. 
Sep 12 17:40:12.308716 kubelet[3224]: E0912 17:40:11.931929 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:12.308716 kubelet[3224]: I0912 17:40:12.041951 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fc767b4f9-7d9ks" podStartSLOduration=3.99921989 podStartE2EDuration="7.041927981s" podCreationTimestamp="2025-09-12 17:40:05 +0000 UTC" firstStartedPulling="2025-09-12 17:40:06.218889212 +0000 UTC m=+20.401015967" lastFinishedPulling="2025-09-12 17:40:09.261597203 +0000 UTC m=+23.443724058" observedRunningTime="2025-09-12 17:40:10.03347055 +0000 UTC m=+24.215597405" watchObservedRunningTime="2025-09-12 17:40:12.041927981 +0000 UTC m=+26.224054836" Sep 12 17:40:13.061593 containerd[1703]: time="2025-09-12T17:40:13.061516811Z" level=info msg="shim disconnected" id=e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6 namespace=k8s.io Sep 12 17:40:13.061593 containerd[1703]: time="2025-09-12T17:40:13.061584015Z" level=warning msg="cleaning up after shim disconnected" id=e4a029cef251ea2078699cd356e70f4a8e1ebb1383c16ec8a1faaeea0b42e8c6 namespace=k8s.io Sep 12 17:40:13.061593 containerd[1703]: time="2025-09-12T17:40:13.061596015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:13.932555 kubelet[3224]: E0912 17:40:13.931665 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:14.031775 containerd[1703]: time="2025-09-12T17:40:14.031483175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:40:15.934269 kubelet[3224]: E0912 17:40:15.934221 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:17.312230 kubelet[3224]: I0912 17:40:17.311814 3224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:17.932767 kubelet[3224]: E0912 17:40:17.932428 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:18.703962 containerd[1703]: time="2025-09-12T17:40:18.703919480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.706536 containerd[1703]: time="2025-09-12T17:40:18.706382730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:40:18.709384 containerd[1703]: time="2025-09-12T17:40:18.709333810Z" level=info msg="ImageCreate event 
name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.714462 containerd[1703]: time="2025-09-12T17:40:18.714399519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:18.715315 containerd[1703]: time="2025-09-12T17:40:18.715185467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.683656789s" Sep 12 17:40:18.715315 containerd[1703]: time="2025-09-12T17:40:18.715224369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:40:18.718430 containerd[1703]: time="2025-09-12T17:40:18.718119845Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:40:18.752304 containerd[1703]: time="2025-09-12T17:40:18.752256425Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16\"" Sep 12 17:40:18.753449 containerd[1703]: time="2025-09-12T17:40:18.753278688Z" level=info msg="StartContainer for \"39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16\"" Sep 12 17:40:18.789575 systemd[1]: run-containerd-runc-k8s.io-39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16-runc.rSsU7Z.mount: Deactivated successfully. Sep 12 17:40:18.799905 systemd[1]: Started cri-containerd-39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16.scope - libcontainer container 39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16. Sep 12 17:40:18.830698 containerd[1703]: time="2025-09-12T17:40:18.830507193Z" level=info msg="StartContainer for \"39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16\" returns successfully" Sep 12 17:40:19.933780 kubelet[3224]: E0912 17:40:19.931795 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:20.528343 containerd[1703]: time="2025-09-12T17:40:20.528219627Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:40:20.530801 systemd[1]: cri-containerd-39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16.scope: Deactivated successfully. Sep 12 17:40:20.553687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16-rootfs.mount: Deactivated successfully. 
Sep 12 17:40:20.562711 kubelet[3224]: I0912 17:40:20.562005 3224 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:40:20.608866 systemd[1]: Created slice kubepods-burstable-pod21f78540_0615_49df_a575_7f7ff1c93e62.slice - libcontainer container kubepods-burstable-pod21f78540_0615_49df_a575_7f7ff1c93e62.slice. Sep 12 17:40:20.624500 systemd[1]: Created slice kubepods-besteffort-pod879cd617_5321_4a0d_b131_eb344a4a1fbe.slice - libcontainer container kubepods-besteffort-pod879cd617_5321_4a0d_b131_eb344a4a1fbe.slice. Sep 12 17:40:21.152330 kubelet[3224]: W0912 17:40:20.639958 3224 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-a-da806c5a3d" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-a-da806c5a3d' and this object Sep 12 17:40:21.152330 kubelet[3224]: E0912 17:40:20.640027 3224 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-a-da806c5a3d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-a-da806c5a3d' and this object" logger="UnhandledError" Sep 12 17:40:21.152330 kubelet[3224]: W0912 17:40:20.646465 3224 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.6-a-da806c5a3d" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-a-da806c5a3d' and this object Sep 12 17:40:21.152330 kubelet[3224]: E0912 17:40:20.646504 3224 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.6-a-da806c5a3d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-a-da806c5a3d' and this object" logger="UnhandledError" Sep 12 17:40:20.634353 systemd[1]: Created slice kubepods-besteffort-pod78d725d5_0368_4c83_a47c_f7b5ec0c5f89.slice - libcontainer container kubepods-besteffort-pod78d725d5_0368_4c83_a47c_f7b5ec0c5f89.slice. 
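Editor's note on the pod_startup_latency_tracker entry logged at 17:40:12 for calico-typha-7fc767b4f9-7d9ks: its fields are internally consistent if podStartE2EDuration is taken as observed-running time minus pod creation time, and podStartSLOduration as that figure minus the image-pull window measured on the monotonic m=+ offsets, which is consistent with the SLO duration excluding image-pull time. The snippet below simply reproduces the arithmetic from the values in that log line; the interpretation of the fields is the only assumption, and the numbers bear it out.

```go
package main

import "fmt"

func main() {
	// Values copied from the kubelet log entry (seconds).
	e2e := 7.041927981                  // 17:40:12.041927981 minus creation at 17:40:05
	pull := 23.443724058 - 20.401015967 // lastFinishedPulling - firstStartedPulling (m=+ offsets)

	slo := e2e - pull
	fmt.Printf("image pull window:   %.9fs\n", pull) // 3.042708091s
	fmt.Printf("podStartSLOduration: %.8f\n", slo)   // 3.99921989, as logged
}
```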
Sep 12 17:40:21.153251 kubelet[3224]: I0912 17:40:20.690713 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbps\" (UniqueName: \"kubernetes.io/projected/66fb902f-f57e-4463-81fb-6c863e23cadb-kube-api-access-cdbps\") pod \"calico-kube-controllers-5bfb88c844-q2vh8\" (UID: \"66fb902f-f57e-4463-81fb-6c863e23cadb\") " pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" Sep 12 17:40:21.153251 kubelet[3224]: I0912 17:40:20.690769 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbsn8\" (UniqueName: \"kubernetes.io/projected/21f78540-0615-49df-a575-7f7ff1c93e62-kube-api-access-qbsn8\") pod \"coredns-668d6bf9bc-67d27\" (UID: \"21f78540-0615-49df-a575-7f7ff1c93e62\") " pod="kube-system/coredns-668d6bf9bc-67d27" Sep 12 17:40:21.153251 kubelet[3224]: I0912 17:40:20.690801 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63bab1f9-13e1-4261-8028-902eba3c8e04-config-volume\") pod \"coredns-668d6bf9bc-njvm2\" (UID: \"63bab1f9-13e1-4261-8028-902eba3c8e04\") " pod="kube-system/coredns-668d6bf9bc-njvm2" Sep 12 17:40:21.153251 kubelet[3224]: I0912 17:40:20.690823 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-backend-key-pair\") pod \"whisker-6dd6b9ffc6-zl92p\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " pod="calico-system/whisker-6dd6b9ffc6-zl92p" Sep 12 17:40:21.153251 kubelet[3224]: I0912 17:40:20.690845 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e55732e-c069-4255-903c-d2d265e35b14-calico-apiserver-certs\") pod \"calico-apiserver-fcb6c89c9-wrztx\" (UID: \"4e55732e-c069-4255-903c-d2d265e35b14\") " pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" Sep 12 17:40:20.665176 systemd[1]: Created slice kubepods-besteffort-poda75fb64e_a5f0_43ca_beae_d16320e589c8.slice - libcontainer container kubepods-besteffort-poda75fb64e_a5f0_43ca_beae_d16320e589c8.slice. 
Sep 12 17:40:21.153562 kubelet[3224]: I0912 17:40:20.690869 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46635db1-496e-4289-8758-6f4e732d4253-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hhnb5\" (UID: \"46635db1-496e-4289-8758-6f4e732d4253\") " pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:21.153562 kubelet[3224]: I0912 17:40:20.690888 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k97\" (UniqueName: \"kubernetes.io/projected/46635db1-496e-4289-8758-6f4e732d4253-kube-api-access-q6k97\") pod \"goldmane-54d579b49d-hhnb5\" (UID: \"46635db1-496e-4289-8758-6f4e732d4253\") " pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:21.153562 kubelet[3224]: I0912 17:40:20.690909 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21f78540-0615-49df-a575-7f7ff1c93e62-config-volume\") pod \"coredns-668d6bf9bc-67d27\" (UID: \"21f78540-0615-49df-a575-7f7ff1c93e62\") " pod="kube-system/coredns-668d6bf9bc-67d27" Sep 12 17:40:21.153562 kubelet[3224]: I0912 17:40:20.690932 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr99d\" (UniqueName: \"kubernetes.io/projected/63bab1f9-13e1-4261-8028-902eba3c8e04-kube-api-access-nr99d\") pod \"coredns-668d6bf9bc-njvm2\" (UID: \"63bab1f9-13e1-4261-8028-902eba3c8e04\") " pod="kube-system/coredns-668d6bf9bc-njvm2" Sep 12 17:40:21.153562 kubelet[3224]: I0912 17:40:20.690956 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66fb902f-f57e-4463-81fb-6c863e23cadb-tigera-ca-bundle\") pod \"calico-kube-controllers-5bfb88c844-q2vh8\" (UID: \"66fb902f-f57e-4463-81fb-6c863e23cadb\") " pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" Sep 12 17:40:20.675482 systemd[1]: Created slice kubepods-burstable-pod63bab1f9_13e1_4261_8028_902eba3c8e04.slice - libcontainer container kubepods-burstable-pod63bab1f9_13e1_4261_8028_902eba3c8e04.slice. 
Sep 12 17:40:21.153906 kubelet[3224]: I0912 17:40:20.691563 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/46635db1-496e-4289-8758-6f4e732d4253-goldmane-key-pair\") pod \"goldmane-54d579b49d-hhnb5\" (UID: \"46635db1-496e-4289-8758-6f4e732d4253\") " pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:21.153906 kubelet[3224]: I0912 17:40:20.691621 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl7d8\" (UniqueName: \"kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8\") pod \"calico-apiserver-7f8c6cc887-x9kbt\" (UID: \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\") " pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" Sep 12 17:40:21.153906 kubelet[3224]: I0912 17:40:20.691646 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a75fb64e-a5f0-43ca-beae-d16320e589c8-calico-apiserver-certs\") pod \"calico-apiserver-7f8c6cc887-h9b8d\" (UID: \"a75fb64e-a5f0-43ca-beae-d16320e589c8\") " pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" Sep 12 17:40:21.153906 kubelet[3224]: I0912 17:40:20.691669 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5br\" (UniqueName: \"kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br\") pod \"calico-apiserver-7f8c6cc887-h9b8d\" (UID: \"a75fb64e-a5f0-43ca-beae-d16320e589c8\") " pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" Sep 12 17:40:21.153906 kubelet[3224]: I0912 17:40:20.691694 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle\") pod \"whisker-6dd6b9ffc6-zl92p\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " pod="calico-system/whisker-6dd6b9ffc6-zl92p" Sep 12 17:40:20.683879 systemd[1]: Created slice kubepods-besteffort-pod66fb902f_f57e_4463_81fb_6c863e23cadb.slice - libcontainer container kubepods-besteffort-pod66fb902f_f57e_4463_81fb_6c863e23cadb.slice. 
Sep 12 17:40:21.154235 kubelet[3224]: I0912 17:40:20.691719 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv42q\" (UniqueName: \"kubernetes.io/projected/879cd617-5321-4a0d-b131-eb344a4a1fbe-kube-api-access-mv42q\") pod \"whisker-6dd6b9ffc6-zl92p\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " pod="calico-system/whisker-6dd6b9ffc6-zl92p" Sep 12 17:40:21.154235 kubelet[3224]: I0912 17:40:20.691781 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-calico-apiserver-certs\") pod \"calico-apiserver-7f8c6cc887-x9kbt\" (UID: \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\") " pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" Sep 12 17:40:21.154235 kubelet[3224]: I0912 17:40:20.691811 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46635db1-496e-4289-8758-6f4e732d4253-config\") pod \"goldmane-54d579b49d-hhnb5\" (UID: \"46635db1-496e-4289-8758-6f4e732d4253\") " pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:21.154235 kubelet[3224]: I0912 17:40:20.691841 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcs9\" (UniqueName: \"kubernetes.io/projected/4e55732e-c069-4255-903c-d2d265e35b14-kube-api-access-qpcs9\") pod \"calico-apiserver-fcb6c89c9-wrztx\" (UID: \"4e55732e-c069-4255-903c-d2d265e35b14\") " pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" Sep 12 17:40:20.696597 systemd[1]: Created slice kubepods-besteffort-pod4e55732e_c069_4255_903c_d2d265e35b14.slice - libcontainer container kubepods-besteffort-pod4e55732e_c069_4255_903c_d2d265e35b14.slice. Sep 12 17:40:20.705007 systemd[1]: Created slice kubepods-besteffort-pod46635db1_496e_4289_8758_6f4e732d4253.slice - libcontainer container kubepods-besteffort-pod46635db1_496e_4289_8758_6f4e732d4253.slice. Sep 12 17:40:21.454174 containerd[1703]: time="2025-09-12T17:40:21.452400333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67d27,Uid:21f78540-0615-49df-a575-7f7ff1c93e62,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:21.478592 containerd[1703]: time="2025-09-12T17:40:21.478542225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfb88c844-q2vh8,Uid:66fb902f-f57e-4463-81fb-6c863e23cadb,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:21.493075 containerd[1703]: time="2025-09-12T17:40:21.493026108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hhnb5,Uid:46635db1-496e-4289-8758-6f4e732d4253,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:21.500528 containerd[1703]: time="2025-09-12T17:40:21.500457761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njvm2,Uid:63bab1f9-13e1-4261-8028-902eba3c8e04,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:21.795801 kubelet[3224]: E0912 17:40:21.795611 3224 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.795801 kubelet[3224]: E0912 17:40:21.795772 3224 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle podName:879cd617-5321-4a0d-b131-eb344a4a1fbe nodeName:}" failed. 
No retries permitted until 2025-09-12 17:40:22.295715149 +0000 UTC m=+36.477841904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle") pod "whisker-6dd6b9ffc6-zl92p" (UID: "879cd617-5321-4a0d-b131-eb344a4a1fbe") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.802051 containerd[1703]: time="2025-09-12T17:40:21.801803420Z" level=info msg="shim disconnected" id=39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16 namespace=k8s.io Sep 12 17:40:21.802051 containerd[1703]: time="2025-09-12T17:40:21.801866024Z" level=warning msg="cleaning up after shim disconnected" id=39d0a0351a5f8ae997dfc6f60bd16b9d5992e467ae18a48d0d1f830253ae8d16 namespace=k8s.io Sep 12 17:40:21.802051 containerd[1703]: time="2025-09-12T17:40:21.801876925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:21.808540 kubelet[3224]: E0912 17:40:21.806527 3224 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.808540 kubelet[3224]: E0912 17:40:21.806563 3224 projected.go:194] Error preparing data for projected volume kube-api-access-qpcs9 for pod calico-apiserver/calico-apiserver-fcb6c89c9-wrztx: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.808540 kubelet[3224]: E0912 17:40:21.806632 3224 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e55732e-c069-4255-903c-d2d265e35b14-kube-api-access-qpcs9 podName:4e55732e-c069-4255-903c-d2d265e35b14 nodeName:}" failed. No retries permitted until 2025-09-12 17:40:22.306608813 +0000 UTC m=+36.488735568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpcs9" (UniqueName: "kubernetes.io/projected/4e55732e-c069-4255-903c-d2d265e35b14-kube-api-access-qpcs9") pod "calico-apiserver-fcb6c89c9-wrztx" (UID: "4e55732e-c069-4255-903c-d2d265e35b14") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.814856 kubelet[3224]: E0912 17:40:21.814815 3224 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.814856 kubelet[3224]: E0912 17:40:21.814859 3224 projected.go:194] Error preparing data for projected volume kube-api-access-7g5br for pod calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.815024 kubelet[3224]: E0912 17:40:21.814917 3224 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br podName:a75fb64e-a5f0-43ca-beae-d16320e589c8 nodeName:}" failed. No retries permitted until 2025-09-12 17:40:22.314896218 +0000 UTC m=+36.497022973 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7g5br" (UniqueName: "kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br") pod "calico-apiserver-7f8c6cc887-h9b8d" (UID: "a75fb64e-a5f0-43ca-beae-d16320e589c8") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.815850 kubelet[3224]: E0912 17:40:21.815721 3224 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.815850 kubelet[3224]: E0912 17:40:21.815778 3224 projected.go:194] Error preparing data for projected volume kube-api-access-cl7d8 for pod calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.816326 kubelet[3224]: E0912 17:40:21.815860 3224 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8 podName:78d725d5-0368-4c83-a47c-f7b5ec0c5f89 nodeName:}" failed. No retries permitted until 2025-09-12 17:40:22.315841175 +0000 UTC m=+36.497967930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl7d8" (UniqueName: "kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8") pod "calico-apiserver-7f8c6cc887-x9kbt" (UID: "78d725d5-0368-4c83-a47c-f7b5ec0c5f89") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:40:21.945019 systemd[1]: Created slice kubepods-besteffort-pod094c632a_fdc9_41e5_8058_f2de8effbf33.slice - libcontainer container kubepods-besteffort-pod094c632a_fdc9_41e5_8058_f2de8effbf33.slice. Sep 12 17:40:21.951622 containerd[1703]: time="2025-09-12T17:40:21.951306129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4nvw,Uid:094c632a-fdc9-41e5-8058-f2de8effbf33,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:22.055819 containerd[1703]: time="2025-09-12T17:40:22.055570381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:40:22.076280 containerd[1703]: time="2025-09-12T17:40:22.076135134Z" level=error msg="Failed to destroy network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.076903 containerd[1703]: time="2025-09-12T17:40:22.076690668Z" level=error msg="encountered an error cleaning up failed sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.076903 containerd[1703]: time="2025-09-12T17:40:22.076788674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfb88c844-q2vh8,Uid:66fb902f-f57e-4463-81fb-6c863e23cadb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.077849 kubelet[3224]: E0912 
17:40:22.077346 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.077849 kubelet[3224]: E0912 17:40:22.077426 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" Sep 12 17:40:22.077849 kubelet[3224]: E0912 17:40:22.077453 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" Sep 12 17:40:22.078047 kubelet[3224]: E0912 17:40:22.077510 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bfb88c844-q2vh8_calico-system(66fb902f-f57e-4463-81fb-6c863e23cadb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bfb88c844-q2vh8_calico-system(66fb902f-f57e-4463-81fb-6c863e23cadb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" podUID="66fb902f-f57e-4463-81fb-6c863e23cadb" Sep 12 17:40:22.112986 containerd[1703]: time="2025-09-12T17:40:22.112934976Z" level=error msg="Failed to destroy network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.113932 containerd[1703]: time="2025-09-12T17:40:22.113886434Z" level=error msg="encountered an error cleaning up failed sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.115563 containerd[1703]: time="2025-09-12T17:40:22.113965539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hhnb5,Uid:46635db1-496e-4289-8758-6f4e732d4253,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.115724 kubelet[3224]: E0912 17:40:22.114153 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.115724 kubelet[3224]: E0912 17:40:22.114204 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:22.115724 kubelet[3224]: E0912 17:40:22.114232 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hhnb5" Sep 12 17:40:22.116026 kubelet[3224]: E0912 17:40:22.114277 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hhnb5_calico-system(46635db1-496e-4289-8758-6f4e732d4253)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hhnb5_calico-system(46635db1-496e-4289-8758-6f4e732d4253)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hhnb5" podUID="46635db1-496e-4289-8758-6f4e732d4253" Sep 12 17:40:22.122569 containerd[1703]: time="2025-09-12T17:40:22.122527760Z" level=error msg="Failed to destroy network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.122979 containerd[1703]: time="2025-09-12T17:40:22.122934785Z" level=error msg="encountered an error cleaning up failed sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.123056 containerd[1703]: time="2025-09-12T17:40:22.123015890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67d27,Uid:21f78540-0615-49df-a575-7f7ff1c93e62,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.123660 kubelet[3224]: E0912 17:40:22.123242 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.123660 kubelet[3224]: E0912 17:40:22.123295 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-67d27" Sep 12 17:40:22.123660 kubelet[3224]: E0912 17:40:22.123319 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-67d27" Sep 12 17:40:22.124076 kubelet[3224]: E0912 17:40:22.123368 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-67d27_kube-system(21f78540-0615-49df-a575-7f7ff1c93e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-67d27_kube-system(21f78540-0615-49df-a575-7f7ff1c93e62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-67d27" podUID="21f78540-0615-49df-a575-7f7ff1c93e62" Sep 12 17:40:22.126240 containerd[1703]: time="2025-09-12T17:40:22.126208385Z" level=error msg="Failed to destroy network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.126502 containerd[1703]: time="2025-09-12T17:40:22.126467501Z" level=error msg="encountered an error cleaning up failed sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.126576 containerd[1703]: time="2025-09-12T17:40:22.126519904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njvm2,Uid:63bab1f9-13e1-4261-8028-902eba3c8e04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.126982 kubelet[3224]: E0912 17:40:22.126717 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.126982 kubelet[3224]: E0912 17:40:22.126802 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-njvm2" Sep 12 17:40:22.126982 kubelet[3224]: E0912 17:40:22.126828 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-njvm2" Sep 12 17:40:22.127134 kubelet[3224]: E0912 17:40:22.126875 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-njvm2_kube-system(63bab1f9-13e1-4261-8028-902eba3c8e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-njvm2_kube-system(63bab1f9-13e1-4261-8028-902eba3c8e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-njvm2" podUID="63bab1f9-13e1-4261-8028-902eba3c8e04" Sep 12 17:40:22.145977 containerd[1703]: time="2025-09-12T17:40:22.145921886Z" level=error msg="Failed to destroy network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.146278 containerd[1703]: time="2025-09-12T17:40:22.146248006Z" level=error msg="encountered an error cleaning up failed sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.146385 containerd[1703]: time="2025-09-12T17:40:22.146308909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4nvw,Uid:094c632a-fdc9-41e5-8058-f2de8effbf33,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.146559 kubelet[3224]: E0912 17:40:22.146519 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.146657 kubelet[3224]: E0912 17:40:22.146576 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:22.146657 kubelet[3224]: E0912 17:40:22.146602 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k4nvw" Sep 12 17:40:22.146768 kubelet[3224]: E0912 17:40:22.146665 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k4nvw_calico-system(094c632a-fdc9-41e5-8058-f2de8effbf33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k4nvw_calico-system(094c632a-fdc9-41e5-8058-f2de8effbf33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:22.357799 containerd[1703]: time="2025-09-12T17:40:22.357319365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd6b9ffc6-zl92p,Uid:879cd617-5321-4a0d-b131-eb344a4a1fbe,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:22.366950 containerd[1703]: time="2025-09-12T17:40:22.366911650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-wrztx,Uid:4e55732e-c069-4255-903c-d2d265e35b14,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:22.507020 containerd[1703]: time="2025-09-12T17:40:22.506972983Z" level=error msg="Failed to destroy network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.507555 containerd[1703]: time="2025-09-12T17:40:22.507521616Z" level=error msg="encountered an error cleaning up failed sandbox 
\"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.507674 containerd[1703]: time="2025-09-12T17:40:22.507596621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd6b9ffc6-zl92p,Uid:879cd617-5321-4a0d-b131-eb344a4a1fbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.509072 kubelet[3224]: E0912 17:40:22.507867 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.509072 kubelet[3224]: E0912 17:40:22.507938 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd6b9ffc6-zl92p" Sep 12 17:40:22.509072 kubelet[3224]: E0912 17:40:22.507981 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd6b9ffc6-zl92p" Sep 12 17:40:22.509552 kubelet[3224]: E0912 17:40:22.508043 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd6b9ffc6-zl92p_calico-system(879cd617-5321-4a0d-b131-eb344a4a1fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd6b9ffc6-zl92p_calico-system(879cd617-5321-4a0d-b131-eb344a4a1fbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd6b9ffc6-zl92p" podUID="879cd617-5321-4a0d-b131-eb344a4a1fbe" Sep 12 17:40:22.517814 containerd[1703]: time="2025-09-12T17:40:22.517754440Z" level=error msg="Failed to destroy network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.518119 containerd[1703]: time="2025-09-12T17:40:22.518086860Z" 
level=error msg="encountered an error cleaning up failed sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.518211 containerd[1703]: time="2025-09-12T17:40:22.518148464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-wrztx,Uid:4e55732e-c069-4255-903c-d2d265e35b14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.518463 kubelet[3224]: E0912 17:40:22.518429 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.518557 kubelet[3224]: E0912 17:40:22.518492 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" Sep 12 17:40:22.518557 kubelet[3224]: E0912 17:40:22.518521 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" Sep 12 17:40:22.518686 kubelet[3224]: E0912 17:40:22.518580 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fcb6c89c9-wrztx_calico-apiserver(4e55732e-c069-4255-903c-d2d265e35b14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fcb6c89c9-wrztx_calico-apiserver(4e55732e-c069-4255-903c-d2d265e35b14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" podUID="4e55732e-c069-4255-903c-d2d265e35b14" Sep 12 17:40:22.558780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c-shm.mount: Deactivated successfully. 
Sep 12 17:40:22.558909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5-shm.mount: Deactivated successfully. Sep 12 17:40:22.558986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc-shm.mount: Deactivated successfully. Sep 12 17:40:22.559072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc-shm.mount: Deactivated successfully. Sep 12 17:40:22.559156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d-shm.mount: Deactivated successfully. Sep 12 17:40:22.667307 containerd[1703]: time="2025-09-12T17:40:22.667259949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-h9b8d,Uid:a75fb64e-a5f0-43ca-beae-d16320e589c8,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:22.679242 containerd[1703]: time="2025-09-12T17:40:22.679200176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-x9kbt,Uid:78d725d5-0368-4c83-a47c-f7b5ec0c5f89,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:22.778192 containerd[1703]: time="2025-09-12T17:40:22.778143304Z" level=error msg="Failed to destroy network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.779215 containerd[1703]: time="2025-09-12T17:40:22.779063160Z" level=error msg="encountered an error cleaning up failed sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.779215 containerd[1703]: time="2025-09-12T17:40:22.779153866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-h9b8d,Uid:a75fb64e-a5f0-43ca-beae-d16320e589c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.780188 kubelet[3224]: E0912 17:40:22.779962 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.780188 kubelet[3224]: E0912 17:40:22.780047 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" Sep 12 17:40:22.780188 kubelet[3224]: E0912 17:40:22.780078 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" Sep 12 17:40:22.781653 kubelet[3224]: E0912 17:40:22.780156 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8c6cc887-h9b8d_calico-apiserver(a75fb64e-a5f0-43ca-beae-d16320e589c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f8c6cc887-h9b8d_calico-apiserver(a75fb64e-a5f0-43ca-beae-d16320e589c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" podUID="a75fb64e-a5f0-43ca-beae-d16320e589c8" Sep 12 17:40:22.781883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b-shm.mount: Deactivated successfully. Sep 12 17:40:22.806219 containerd[1703]: time="2025-09-12T17:40:22.806165911Z" level=error msg="Failed to destroy network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.806700 containerd[1703]: time="2025-09-12T17:40:22.806539834Z" level=error msg="encountered an error cleaning up failed sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.806700 containerd[1703]: time="2025-09-12T17:40:22.806608338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-x9kbt,Uid:78d725d5-0368-4c83-a47c-f7b5ec0c5f89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.806968 kubelet[3224]: E0912 17:40:22.806929 3224 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:22.807047 kubelet[3224]: E0912 17:40:22.806993 3224 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" Sep 12 17:40:22.807047 kubelet[3224]: E0912 17:40:22.807028 3224 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" Sep 12 17:40:22.807188 kubelet[3224]: E0912 17:40:22.807087 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8c6cc887-x9kbt_calico-apiserver(78d725d5-0368-4c83-a47c-f7b5ec0c5f89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f8c6cc887-x9kbt_calico-apiserver(78d725d5-0368-4c83-a47c-f7b5ec0c5f89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" podUID="78d725d5-0368-4c83-a47c-f7b5ec0c5f89" Sep 12 17:40:23.053607 kubelet[3224]: I0912 17:40:23.053088 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:23.054554 containerd[1703]: time="2025-09-12T17:40:23.053961408Z" level=info msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" Sep 12 17:40:23.054554 containerd[1703]: time="2025-09-12T17:40:23.054182922Z" level=info msg="Ensure that sandbox 1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc in task-service has been cleanup successfully" Sep 12 17:40:23.057886 kubelet[3224]: I0912 17:40:23.057818 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:23.058775 containerd[1703]: time="2025-09-12T17:40:23.058366077Z" level=info msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" Sep 12 17:40:23.058775 containerd[1703]: time="2025-09-12T17:40:23.058580090Z" level=info msg="Ensure that sandbox 0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d in task-service has been cleanup successfully" Sep 12 17:40:23.060017 kubelet[3224]: I0912 17:40:23.059983 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:23.064467 containerd[1703]: time="2025-09-12T17:40:23.064428246Z" level=info msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" Sep 12 17:40:23.064647 containerd[1703]: time="2025-09-12T17:40:23.064624158Z" level=info msg="Ensure that sandbox df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27 
in task-service has been cleanup successfully" Sep 12 17:40:23.066885 kubelet[3224]: I0912 17:40:23.066840 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:23.067910 containerd[1703]: time="2025-09-12T17:40:23.067879056Z" level=info msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" Sep 12 17:40:23.070252 containerd[1703]: time="2025-09-12T17:40:23.070084891Z" level=info msg="Ensure that sandbox 220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c in task-service has been cleanup successfully" Sep 12 17:40:23.075060 kubelet[3224]: I0912 17:40:23.074721 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:23.075826 containerd[1703]: time="2025-09-12T17:40:23.075768637Z" level=info msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" Sep 12 17:40:23.076871 containerd[1703]: time="2025-09-12T17:40:23.076732796Z" level=info msg="Ensure that sandbox 3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2 in task-service has been cleanup successfully" Sep 12 17:40:23.085913 kubelet[3224]: I0912 17:40:23.085883 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:23.088010 containerd[1703]: time="2025-09-12T17:40:23.087799570Z" level=info msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" Sep 12 17:40:23.088659 containerd[1703]: time="2025-09-12T17:40:23.088400007Z" level=info msg="Ensure that sandbox 4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b in task-service has been cleanup successfully" Sep 12 17:40:23.090710 kubelet[3224]: I0912 17:40:23.090688 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:23.091392 containerd[1703]: time="2025-09-12T17:40:23.091345086Z" level=info msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" Sep 12 17:40:23.093225 containerd[1703]: time="2025-09-12T17:40:23.093171097Z" level=info msg="Ensure that sandbox e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5 in task-service has been cleanup successfully" Sep 12 17:40:23.098824 kubelet[3224]: I0912 17:40:23.098786 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:23.104399 containerd[1703]: time="2025-09-12T17:40:23.103934953Z" level=info msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" Sep 12 17:40:23.104399 containerd[1703]: time="2025-09-12T17:40:23.104134665Z" level=info msg="Ensure that sandbox d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc in task-service has been cleanup successfully" Sep 12 17:40:23.121013 kubelet[3224]: I0912 17:40:23.120976 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:23.124943 containerd[1703]: time="2025-09-12T17:40:23.124902331Z" level=info msg="StopPodSandbox for 
\"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" Sep 12 17:40:23.125336 containerd[1703]: time="2025-09-12T17:40:23.125108743Z" level=info msg="Ensure that sandbox 4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03 in task-service has been cleanup successfully" Sep 12 17:40:23.231089 containerd[1703]: time="2025-09-12T17:40:23.231013795Z" level=error msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" failed" error="failed to destroy network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.231975 kubelet[3224]: E0912 17:40:23.231668 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:23.231975 kubelet[3224]: E0912 17:40:23.231766 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc"} Sep 12 17:40:23.231975 kubelet[3224]: E0912 17:40:23.231856 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66fb902f-f57e-4463-81fb-6c863e23cadb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.231975 kubelet[3224]: E0912 17:40:23.231888 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66fb902f-f57e-4463-81fb-6c863e23cadb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" podUID="66fb902f-f57e-4463-81fb-6c863e23cadb" Sep 12 17:40:23.240458 containerd[1703]: time="2025-09-12T17:40:23.240406168Z" level=error msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" failed" error="failed to destroy network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.241760 kubelet[3224]: E0912 17:40:23.240650 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:23.241760 kubelet[3224]: E0912 17:40:23.240709 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27"} Sep 12 17:40:23.241760 kubelet[3224]: E0912 17:40:23.240765 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e55732e-c069-4255-903c-d2d265e35b14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.241760 kubelet[3224]: E0912 17:40:23.240796 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e55732e-c069-4255-903c-d2d265e35b14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" podUID="4e55732e-c069-4255-903c-d2d265e35b14" Sep 12 17:40:23.245673 containerd[1703]: time="2025-09-12T17:40:23.245370870Z" level=error msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" failed" error="failed to destroy network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.249079 kubelet[3224]: E0912 17:40:23.248906 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:23.249079 kubelet[3224]: E0912 17:40:23.248959 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d"} Sep 12 17:40:23.249079 kubelet[3224]: E0912 17:40:23.249004 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21f78540-0615-49df-a575-7f7ff1c93e62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.249079 kubelet[3224]: E0912 17:40:23.249045 3224 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21f78540-0615-49df-a575-7f7ff1c93e62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-67d27" podUID="21f78540-0615-49df-a575-7f7ff1c93e62" Sep 12 17:40:23.264926 containerd[1703]: time="2025-09-12T17:40:23.264868958Z" level=error msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" failed" error="failed to destroy network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.265281 kubelet[3224]: E0912 17:40:23.265117 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:23.265281 kubelet[3224]: E0912 17:40:23.265172 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5"} Sep 12 17:40:23.265281 kubelet[3224]: E0912 17:40:23.265221 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63bab1f9-13e1-4261-8028-902eba3c8e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.265281 kubelet[3224]: E0912 17:40:23.265251 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63bab1f9-13e1-4261-8028-902eba3c8e04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-njvm2" podUID="63bab1f9-13e1-4261-8028-902eba3c8e04" Sep 12 17:40:23.267427 containerd[1703]: time="2025-09-12T17:40:23.267384711Z" level=error msg="StopPodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" failed" error="failed to destroy network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.267888 kubelet[3224]: E0912 17:40:23.267848 3224 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:23.268039 kubelet[3224]: E0912 17:40:23.268022 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03"} Sep 12 17:40:23.268185 kubelet[3224]: E0912 17:40:23.268125 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"879cd617-5321-4a0d-b131-eb344a4a1fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.268185 kubelet[3224]: E0912 17:40:23.268156 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"879cd617-5321-4a0d-b131-eb344a4a1fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd6b9ffc6-zl92p" podUID="879cd617-5321-4a0d-b131-eb344a4a1fbe" Sep 12 17:40:23.276822 containerd[1703]: time="2025-09-12T17:40:23.276769083Z" level=error msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" failed" error="failed to destroy network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.277004 containerd[1703]: time="2025-09-12T17:40:23.276799085Z" level=error msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" failed" error="failed to destroy network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.277155 kubelet[3224]: E0912 17:40:23.277125 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:23.277234 kubelet[3224]: E0912 17:40:23.277167 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c"} Sep 12 17:40:23.277234 kubelet[3224]: E0912 17:40:23.277203 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"094c632a-fdc9-41e5-8058-f2de8effbf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.277350 kubelet[3224]: E0912 17:40:23.277238 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"094c632a-fdc9-41e5-8058-f2de8effbf33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k4nvw" podUID="094c632a-fdc9-41e5-8058-f2de8effbf33" Sep 12 17:40:23.277350 kubelet[3224]: E0912 17:40:23.277276 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:23.277350 kubelet[3224]: E0912 17:40:23.277297 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b"} Sep 12 17:40:23.277350 kubelet[3224]: E0912 17:40:23.277323 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a75fb64e-a5f0-43ca-beae-d16320e589c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.277565 kubelet[3224]: E0912 17:40:23.277347 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a75fb64e-a5f0-43ca-beae-d16320e589c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" podUID="a75fb64e-a5f0-43ca-beae-d16320e589c8" Sep 12 17:40:23.277646 containerd[1703]: time="2025-09-12T17:40:23.277461325Z" level=error msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" failed" error="failed to destroy network for sandbox 
\"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.277767 kubelet[3224]: E0912 17:40:23.277711 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:23.278829 kubelet[3224]: E0912 17:40:23.278800 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2"} Sep 12 17:40:23.278921 kubelet[3224]: E0912 17:40:23.278842 3224 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.278921 kubelet[3224]: E0912 17:40:23.278867 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" podUID="78d725d5-0368-4c83-a47c-f7b5ec0c5f89" Sep 12 17:40:23.280484 containerd[1703]: time="2025-09-12T17:40:23.280450107Z" level=error msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" failed" error="failed to destroy network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:23.280630 kubelet[3224]: E0912 17:40:23.280599 3224 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:23.280710 kubelet[3224]: E0912 17:40:23.280633 3224 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc"} Sep 12 17:40:23.280710 kubelet[3224]: E0912 17:40:23.280665 3224 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46635db1-496e-4289-8758-6f4e732d4253\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:23.280710 kubelet[3224]: E0912 17:40:23.280693 3224 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46635db1-496e-4289-8758-6f4e732d4253\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hhnb5" podUID="46635db1-496e-4289-8758-6f4e732d4253" Sep 12 17:40:23.552456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2-shm.mount: Deactivated successfully. Sep 12 17:40:30.773488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259330164.mount: Deactivated successfully. Sep 12 17:40:30.806161 containerd[1703]: time="2025-09-12T17:40:30.806104397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:30.808953 containerd[1703]: time="2025-09-12T17:40:30.808775361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:40:30.811530 containerd[1703]: time="2025-09-12T17:40:30.811492227Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:30.816390 containerd[1703]: time="2025-09-12T17:40:30.815692284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:30.816390 containerd[1703]: time="2025-09-12T17:40:30.816239217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.759381457s" Sep 12 17:40:30.816390 containerd[1703]: time="2025-09-12T17:40:30.816276120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:40:30.825189 containerd[1703]: time="2025-09-12T17:40:30.825141262Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:40:30.863159 containerd[1703]: time="2025-09-12T17:40:30.863115185Z" level=info msg="CreateContainer within sandbox \"b52a43bf9d043290e5d952e83655fc1f3f424e5581a42d2289a026c30175d863\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb\"" Sep 12 17:40:30.866000 containerd[1703]: time="2025-09-12T17:40:30.864625178Z" level=info msg="StartContainer for \"2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb\"" Sep 12 17:40:30.894905 systemd[1]: Started cri-containerd-2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb.scope - libcontainer container 2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb. Sep 12 17:40:30.927108 containerd[1703]: time="2025-09-12T17:40:30.927000894Z" level=info msg="StartContainer for \"2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb\" returns successfully" Sep 12 17:40:31.423384 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:40:31.423544 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:40:31.544788 kubelet[3224]: I0912 17:40:31.543764 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f66xw" podStartSLOduration=1.350421432 podStartE2EDuration="25.543717225s" podCreationTimestamp="2025-09-12 17:40:06 +0000 UTC" firstStartedPulling="2025-09-12 17:40:06.623998689 +0000 UTC m=+20.806125544" lastFinishedPulling="2025-09-12 17:40:30.817294482 +0000 UTC m=+44.999421337" observedRunningTime="2025-09-12 17:40:31.181973393 +0000 UTC m=+45.364100148" watchObservedRunningTime="2025-09-12 17:40:31.543717225 +0000 UTC m=+45.725844080" Sep 12 17:40:31.545665 containerd[1703]: time="2025-09-12T17:40:31.545232318Z" level=info msg="StopPodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.666 [INFO][4541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.666 [INFO][4541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" iface="eth0" netns="/var/run/netns/cni-d1f6e256-a263-1f90-af35-8efd49b49ea9" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.666 [INFO][4541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" iface="eth0" netns="/var/run/netns/cni-d1f6e256-a263-1f90-af35-8efd49b49ea9" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.667 [INFO][4541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" iface="eth0" netns="/var/run/netns/cni-d1f6e256-a263-1f90-af35-8efd49b49ea9" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.667 [INFO][4541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.667 [INFO][4541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.718 [INFO][4548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.718 [INFO][4548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.718 [INFO][4548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.728 [WARNING][4548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.728 [INFO][4548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.729 [INFO][4548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.738024 containerd[1703]: 2025-09-12 17:40:31.734 [INFO][4541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:31.740486 containerd[1703]: time="2025-09-12T17:40:31.738117219Z" level=info msg="TearDown network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" successfully" Sep 12 17:40:31.740486 containerd[1703]: time="2025-09-12T17:40:31.738162922Z" level=info msg="StopPodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" returns successfully" Sep 12 17:40:31.773817 systemd[1]: run-netns-cni\x2dd1f6e256\x2da263\x2d1f90\x2daf35\x2d8efd49b49ea9.mount: Deactivated successfully. 
Sep 12 17:40:31.783312 kubelet[3224]: I0912 17:40:31.782713 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle\") pod \"879cd617-5321-4a0d-b131-eb344a4a1fbe\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " Sep 12 17:40:31.783312 kubelet[3224]: I0912 17:40:31.782781 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv42q\" (UniqueName: \"kubernetes.io/projected/879cd617-5321-4a0d-b131-eb344a4a1fbe-kube-api-access-mv42q\") pod \"879cd617-5321-4a0d-b131-eb344a4a1fbe\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " Sep 12 17:40:31.783312 kubelet[3224]: I0912 17:40:31.782837 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-backend-key-pair\") pod \"879cd617-5321-4a0d-b131-eb344a4a1fbe\" (UID: \"879cd617-5321-4a0d-b131-eb344a4a1fbe\") " Sep 12 17:40:31.784215 kubelet[3224]: I0912 17:40:31.783925 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "879cd617-5321-4a0d-b131-eb344a4a1fbe" (UID: "879cd617-5321-4a0d-b131-eb344a4a1fbe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:40:31.791774 kubelet[3224]: I0912 17:40:31.791747 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "879cd617-5321-4a0d-b131-eb344a4a1fbe" (UID: "879cd617-5321-4a0d-b131-eb344a4a1fbe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:40:31.794226 kubelet[3224]: I0912 17:40:31.791722 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879cd617-5321-4a0d-b131-eb344a4a1fbe-kube-api-access-mv42q" (OuterVolumeSpecName: "kube-api-access-mv42q") pod "879cd617-5321-4a0d-b131-eb344a4a1fbe" (UID: "879cd617-5321-4a0d-b131-eb344a4a1fbe"). InnerVolumeSpecName "kube-api-access-mv42q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:40:31.793904 systemd[1]: var-lib-kubelet-pods-879cd617\x2d5321\x2d4a0d\x2db131\x2deb344a4a1fbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmv42q.mount: Deactivated successfully. Sep 12 17:40:31.794019 systemd[1]: var-lib-kubelet-pods-879cd617\x2d5321\x2d4a0d\x2db131\x2deb344a4a1fbe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:40:31.883924 kubelet[3224]: I0912 17:40:31.883829 3224 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-backend-key-pair\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:40:31.883924 kubelet[3224]: I0912 17:40:31.883874 3224 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879cd617-5321-4a0d-b131-eb344a4a1fbe-whisker-ca-bundle\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:40:31.883924 kubelet[3224]: I0912 17:40:31.883888 3224 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mv42q\" (UniqueName: \"kubernetes.io/projected/879cd617-5321-4a0d-b131-eb344a4a1fbe-kube-api-access-mv42q\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:40:31.940152 systemd[1]: Removed slice kubepods-besteffort-pod879cd617_5321_4a0d_b131_eb344a4a1fbe.slice - libcontainer container kubepods-besteffort-pod879cd617_5321_4a0d_b131_eb344a4a1fbe.slice. Sep 12 17:40:32.248152 systemd[1]: Created slice kubepods-besteffort-podffb8949a_32f9_4e06_b497_3c6591d5f88b.slice - libcontainer container kubepods-besteffort-podffb8949a_32f9_4e06_b497_3c6591d5f88b.slice. Sep 12 17:40:32.288154 kubelet[3224]: I0912 17:40:32.288108 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffb8949a-32f9-4e06-b497-3c6591d5f88b-whisker-ca-bundle\") pod \"whisker-77b4c58d6c-r968s\" (UID: \"ffb8949a-32f9-4e06-b497-3c6591d5f88b\") " pod="calico-system/whisker-77b4c58d6c-r968s" Sep 12 17:40:32.289884 kubelet[3224]: I0912 17:40:32.289845 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffb8949a-32f9-4e06-b497-3c6591d5f88b-whisker-backend-key-pair\") pod \"whisker-77b4c58d6c-r968s\" (UID: \"ffb8949a-32f9-4e06-b497-3c6591d5f88b\") " pod="calico-system/whisker-77b4c58d6c-r968s" Sep 12 17:40:32.290220 kubelet[3224]: I0912 17:40:32.289907 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7szl\" (UniqueName: \"kubernetes.io/projected/ffb8949a-32f9-4e06-b497-3c6591d5f88b-kube-api-access-q7szl\") pod \"whisker-77b4c58d6c-r968s\" (UID: \"ffb8949a-32f9-4e06-b497-3c6591d5f88b\") " pod="calico-system/whisker-77b4c58d6c-r968s" Sep 12 17:40:32.553767 containerd[1703]: time="2025-09-12T17:40:32.553622912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b4c58d6c-r968s,Uid:ffb8949a-32f9-4e06-b497-3c6591d5f88b,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:32.721896 systemd-networkd[1594]: cali7a8c57afe34: Link UP Sep 12 17:40:32.722188 systemd-networkd[1594]: cali7a8c57afe34: Gained carrier Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.605 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.614 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0 whisker-77b4c58d6c- calico-system ffb8949a-32f9-4e06-b497-3c6591d5f88b 951 0 2025-09-12 17:40:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77b4c58d6c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d whisker-77b4c58d6c-r968s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7a8c57afe34 [] [] }} ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.615 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.641 [INFO][4609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" HandleID="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.641 [INFO][4609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" HandleID="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"whisker-77b4c58d6c-r968s", "timestamp":"2025-09-12 17:40:32.641080263 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.641 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.641 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.641 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.647 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.651 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.654 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.655 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.657 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.657 [INFO][4609] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.658 [INFO][4609] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.666 [INFO][4609] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.673 [INFO][4609] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 handle="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.673 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] handle="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.673 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:32.743216 containerd[1703]: 2025-09-12 17:40:32.673 [INFO][4609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" HandleID="k8s-pod-network.1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.676 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0", GenerateName:"whisker-77b4c58d6c-", Namespace:"calico-system", SelfLink:"", UID:"ffb8949a-32f9-4e06-b497-3c6591d5f88b", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77b4c58d6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"whisker-77b4c58d6c-r968s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a8c57afe34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.676 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.65/32] ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.676 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a8c57afe34 ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.722 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.723 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" 
Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0", GenerateName:"whisker-77b4c58d6c-", Namespace:"calico-system", SelfLink:"", UID:"ffb8949a-32f9-4e06-b497-3c6591d5f88b", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77b4c58d6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd", Pod:"whisker-77b4c58d6c-r968s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a8c57afe34", MAC:"ca:5a:74:51:73:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:32.744574 containerd[1703]: 2025-09-12 17:40:32.740 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd" Namespace="calico-system" Pod="whisker-77b4c58d6c-r968s" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--77b4c58d6c--r968s-eth0" Sep 12 17:40:32.772054 containerd[1703]: time="2025-09-12T17:40:32.771776559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:32.772054 containerd[1703]: time="2025-09-12T17:40:32.771846363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:32.772054 containerd[1703]: time="2025-09-12T17:40:32.771871665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.772054 containerd[1703]: time="2025-09-12T17:40:32.771952570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.810969 systemd[1]: Started cri-containerd-1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd.scope - libcontainer container 1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd. 
Sep 12 17:40:32.851208 containerd[1703]: time="2025-09-12T17:40:32.851120513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b4c58d6c-r968s,Uid:ffb8949a-32f9-4e06-b497-3c6591d5f88b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd\"" Sep 12 17:40:32.854216 containerd[1703]: time="2025-09-12T17:40:32.854178400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:40:33.240814 kernel: bpftool[4763]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:40:33.703898 systemd-networkd[1594]: vxlan.calico: Link UP Sep 12 17:40:33.703909 systemd-networkd[1594]: vxlan.calico: Gained carrier Sep 12 17:40:33.933769 containerd[1703]: time="2025-09-12T17:40:33.933013905Z" level=info msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" Sep 12 17:40:33.935579 containerd[1703]: time="2025-09-12T17:40:33.933964363Z" level=info msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" Sep 12 17:40:33.948460 kubelet[3224]: I0912 17:40:33.948418 3224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879cd617-5321-4a0d-b131-eb344a4a1fbe" path="/var/lib/kubelet/pods/879cd617-5321-4a0d-b131-eb344a4a1fbe/volumes" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.035 [INFO][4838] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.036 [INFO][4838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" iface="eth0" netns="/var/run/netns/cni-01c792ea-a719-2ae0-d093-f4447d4b08e8" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.037 [INFO][4838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" iface="eth0" netns="/var/run/netns/cni-01c792ea-a719-2ae0-d093-f4447d4b08e8" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.037 [INFO][4838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" iface="eth0" netns="/var/run/netns/cni-01c792ea-a719-2ae0-d093-f4447d4b08e8" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.037 [INFO][4838] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.038 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.077 [INFO][4865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.077 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.077 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.083 [WARNING][4865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.083 [INFO][4865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.085 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:34.089550 containerd[1703]: 2025-09-12 17:40:34.086 [INFO][4838] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:34.092428 containerd[1703]: time="2025-09-12T17:40:34.089761995Z" level=info msg="TearDown network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" successfully" Sep 12 17:40:34.092428 containerd[1703]: time="2025-09-12T17:40:34.089800097Z" level=info msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" returns successfully" Sep 12 17:40:34.098089 containerd[1703]: time="2025-09-12T17:40:34.098045401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-h9b8d,Uid:a75fb64e-a5f0-43ca-beae-d16320e589c8,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:34.099056 systemd[1]: run-netns-cni\x2d01c792ea\x2da719\x2d2ae0\x2dd093\x2df4447d4b08e8.mount: Deactivated successfully. Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.040 [INFO][4843] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.040 [INFO][4843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" iface="eth0" netns="/var/run/netns/cni-dbe7f5c5-6a5d-2314-608c-de01ac642ad6" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.041 [INFO][4843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" iface="eth0" netns="/var/run/netns/cni-dbe7f5c5-6a5d-2314-608c-de01ac642ad6" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.043 [INFO][4843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" iface="eth0" netns="/var/run/netns/cni-dbe7f5c5-6a5d-2314-608c-de01ac642ad6" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.043 [INFO][4843] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.043 [INFO][4843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.078 [INFO][4867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.079 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.085 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.095 [WARNING][4867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.096 [INFO][4867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.097 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:34.102036 containerd[1703]: 2025-09-12 17:40:34.099 [INFO][4843] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:34.102036 containerd[1703]: time="2025-09-12T17:40:34.101818532Z" level=info msg="TearDown network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" successfully" Sep 12 17:40:34.102036 containerd[1703]: time="2025-09-12T17:40:34.101859535Z" level=info msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" returns successfully" Sep 12 17:40:34.105809 containerd[1703]: time="2025-09-12T17:40:34.102705687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfb88c844-q2vh8,Uid:66fb902f-f57e-4463-81fb-6c863e23cadb,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:34.107281 systemd[1]: run-netns-cni\x2ddbe7f5c5\x2d6a5d\x2d2314\x2d608c\x2dde01ac642ad6.mount: Deactivated successfully. 
Sep 12 17:40:34.401557 systemd-networkd[1594]: cali52c8b3a3674: Link UP Sep 12 17:40:34.404554 systemd-networkd[1594]: cali52c8b3a3674: Gained carrier Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.243 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0 calico-kube-controllers-5bfb88c844- calico-system 66fb902f-f57e-4463-81fb-6c863e23cadb 968 0 2025-09-12 17:40:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bfb88c844 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d calico-kube-controllers-5bfb88c844-q2vh8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali52c8b3a3674 [] [] }} ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.243 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.324 [INFO][4905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" HandleID="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.324 [INFO][4905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" HandleID="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"calico-kube-controllers-5bfb88c844-q2vh8", "timestamp":"2025-09-12 17:40:34.324289843 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.324 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.324 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.324 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.344 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.359 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.366 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.368 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.372 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.373 [INFO][4905] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.375 [INFO][4905] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.381 [INFO][4905] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.390 [INFO][4905] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 handle="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.390 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.390 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
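The assignment above walks the usual Calico IPAM steps: look up the host's block affinity, load block 192.168.126.64/26, pick the next free address (192.168.126.66 here), then write the block back to claim it. The sketch below shows only the pick-the-first-unused-address step using the standard library, assuming .64 and .65 are already recorded as used (earlier allocations not shown in this excerpt); the real plugin persists the block and claims addresses with the write logged as "Writing block in order to claim IPs".

```go
package main

import (
	"fmt"
	"net"
)

// nextFree returns the first address in cidr that is not already allocated.
func nextFree(cidr string, allocated map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = inc(cur) {
		if !allocated[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", cidr)
}

// inc returns ip+1 without mutating its argument.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	// Assume .64 and .65 are already taken; the next assignment then lands
	// on 192.168.126.66, matching the claim in the log above.
	used := map[string]bool{"192.168.126.64": true, "192.168.126.65": true}
	ip, _ := nextFree("192.168.126.64/26", used)
	fmt.Println(ip) // 192.168.126.66
}
```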
Sep 12 17:40:34.430844 containerd[1703]: 2025-09-12 17:40:34.390 [INFO][4905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" HandleID="k8s-pod-network.dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.394 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0", GenerateName:"calico-kube-controllers-5bfb88c844-", Namespace:"calico-system", SelfLink:"", UID:"66fb902f-f57e-4463-81fb-6c863e23cadb", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfb88c844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"calico-kube-controllers-5bfb88c844-q2vh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52c8b3a3674", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.394 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.66/32] ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.394 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52c8b3a3674 ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.402 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" 
WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.403 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0", GenerateName:"calico-kube-controllers-5bfb88c844-", Namespace:"calico-system", SelfLink:"", UID:"66fb902f-f57e-4463-81fb-6c863e23cadb", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfb88c844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd", Pod:"calico-kube-controllers-5bfb88c844-q2vh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52c8b3a3674", MAC:"ee:26:a5:6f:fc:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:34.432642 containerd[1703]: 2025-09-12 17:40:34.427 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd" Namespace="calico-system" Pod="calico-kube-controllers-5bfb88c844-q2vh8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:34.508768 containerd[1703]: time="2025-09-12T17:40:34.507620060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:34.508768 containerd[1703]: time="2025-09-12T17:40:34.507807671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:34.508768 containerd[1703]: time="2025-09-12T17:40:34.507914878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:34.508768 containerd[1703]: time="2025-09-12T17:40:34.508381606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:34.568185 systemd-networkd[1594]: calic1427382d7f: Link UP Sep 12 17:40:34.570243 systemd-networkd[1594]: calic1427382d7f: Gained carrier Sep 12 17:40:34.576925 systemd[1]: Started cri-containerd-dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd.scope - libcontainer container dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd. Sep 12 17:40:34.606915 systemd-networkd[1594]: cali7a8c57afe34: Gained IPv6LL Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.266 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0 calico-apiserver-7f8c6cc887- calico-apiserver a75fb64e-a5f0-43ca-beae-d16320e589c8 966 0 2025-09-12 17:40:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8c6cc887 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d calico-apiserver-7f8c6cc887-h9b8d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1427382d7f [] [] }} ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.266 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.356 [INFO][4912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.357 [INFO][4912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"calico-apiserver-7f8c6cc887-h9b8d", "timestamp":"2025-09-12 17:40:34.355685064 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.357 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.391 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
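The timestamps above show the host-wide IPAM lock serializing the two concurrent CNI ADDs: request [4912] logs "About to acquire" at 34.357 but only acquires at 34.391, immediately after [4905] releases the lock at 34.390, which is why the two pods receive consecutive addresses. A minimal sketch of that serialization with a sync.Mutex; the address bookkeeping is simplified to a single counter.

```go
package main

import (
	"fmt"
	"sync"
)

// hostLock stands in for the host-wide IPAM lock in the log: only one
// assignment may scan and update an allocation block at a time.
var (
	hostLock sync.Mutex
	nextHost byte = 66 // next free final octet in 192.168.126.64/26 (see log above)
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	hostLock.Lock()
	fmt.Println(pod, "acquired host-wide IPAM lock")
	ip := fmt.Sprintf("192.168.126.%d/26", nextHost)
	nextHost++
	hostLock.Unlock()
	fmt.Println(pod, "released host-wide IPAM lock, assigned", ip)
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	// The two CNI ADDs in the log race for the same lock; whichever wins
	// gets .66 and the other blocks until release, then gets .67.
	go assign("calico-kube-controllers-5bfb88c844-q2vh8", &wg)
	go assign("calico-apiserver-7f8c6cc887-h9b8d", &wg)
	wg.Wait()
}
```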
Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.391 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.445 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.465 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.473 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.477 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.488 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.488 [INFO][4912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.493 [INFO][4912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79 Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.507 [INFO][4912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.523 [INFO][4912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 handle="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.523 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.523 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:34.633158 containerd[1703]: 2025-09-12 17:40:34.523 [INFO][4912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.536 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75fb64e-a5f0-43ca-beae-d16320e589c8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"calico-apiserver-7f8c6cc887-h9b8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1427382d7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.537 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.67/32] ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.537 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1427382d7f ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.575 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.586 
[INFO][4887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75fb64e-a5f0-43ca-beae-d16320e589c8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79", Pod:"calico-apiserver-7f8c6cc887-h9b8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1427382d7f", MAC:"ce:d9:9c:6a:27:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:34.634482 containerd[1703]: 2025-09-12 17:40:34.615 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-h9b8d" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:34.709450 containerd[1703]: time="2025-09-12T17:40:34.709042308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:34.709450 containerd[1703]: time="2025-09-12T17:40:34.709109512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:34.709450 containerd[1703]: time="2025-09-12T17:40:34.709128113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:34.709450 containerd[1703]: time="2025-09-12T17:40:34.709228520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:34.776364 systemd[1]: Started cri-containerd-4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79.scope - libcontainer container 4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79. 
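Three clock formats interleave in these lines: the journal prefix ("Sep 12 17:40:34.633158"), the bracketed Calico CNI timestamps ("2025-09-12 17:40:34.266"), and containerd's RFC 3339 time= field. Anyone post-processing this log needs a separate layout for each; a quick standard-library check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Journal-style prefix on every line; it carries no year, so Parse
	// fills in year 0, which is still fine for ordering within one boot.
	j, err := time.Parse("Jan 2 15:04:05.000000", "Sep 12 17:40:34.633158")
	fmt.Println(j, err)

	// Timestamp embedded in the Calico CNI log fields.
	c, err := time.Parse("2006-01-02 15:04:05.000", "2025-09-12 17:40:34.266")
	fmt.Println(c, err)

	// containerd's structured time= field is RFC 3339 with nanoseconds.
	r, err := time.Parse(time.RFC3339Nano, "2025-09-12T17:40:34.831672952Z")
	fmt.Println(r, err)
}
```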
Sep 12 17:40:34.831713 containerd[1703]: time="2025-09-12T17:40:34.831672952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfb88c844-q2vh8,Uid:66fb902f-f57e-4463-81fb-6c863e23cadb,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd\"" Sep 12 17:40:34.881482 containerd[1703]: time="2025-09-12T17:40:34.881378213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-h9b8d,Uid:a75fb64e-a5f0-43ca-beae-d16320e589c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\"" Sep 12 17:40:34.894784 containerd[1703]: time="2025-09-12T17:40:34.894722988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.896807 containerd[1703]: time="2025-09-12T17:40:34.896753321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:40:34.899780 containerd[1703]: time="2025-09-12T17:40:34.899602808Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.904373 containerd[1703]: time="2025-09-12T17:40:34.903598970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.904373 containerd[1703]: time="2025-09-12T17:40:34.904248113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.05002801s" Sep 12 17:40:34.904373 containerd[1703]: time="2025-09-12T17:40:34.904283415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:40:34.907078 containerd[1703]: time="2025-09-12T17:40:34.907044096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:40:34.907907 containerd[1703]: time="2025-09-12T17:40:34.907875151Z" level=info msg="CreateContainer within sandbox \"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:40:34.932591 containerd[1703]: time="2025-09-12T17:40:34.932529768Z" level=info msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" Sep 12 17:40:34.950648 containerd[1703]: time="2025-09-12T17:40:34.949902008Z" level=info msg="CreateContainer within sandbox \"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3e7b27962a4b61a2cfc2a703184c91c96c81a9b95d864c28ffd13b4c49096fb5\"" Sep 12 17:40:34.951429 containerd[1703]: time="2025-09-12T17:40:34.951394406Z" level=info msg="StartContainer for \"3e7b27962a4b61a2cfc2a703184c91c96c81a9b95d864c28ffd13b4c49096fb5\"" Sep 12 17:40:34.991950 systemd[1]: Started 
cri-containerd-3e7b27962a4b61a2cfc2a703184c91c96c81a9b95d864c28ffd13b4c49096fb5.scope - libcontainer container 3e7b27962a4b61a2cfc2a703184c91c96c81a9b95d864c28ffd13b4c49096fb5. Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.000 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.000 [INFO][5049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" iface="eth0" netns="/var/run/netns/cni-731e010f-098b-972e-09c1-b2e656954f95" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.001 [INFO][5049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" iface="eth0" netns="/var/run/netns/cni-731e010f-098b-972e-09c1-b2e656954f95" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.001 [INFO][5049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" iface="eth0" netns="/var/run/netns/cni-731e010f-098b-972e-09c1-b2e656954f95" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.002 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.002 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.046 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.047 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.047 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.054 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.054 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.056 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:35.060165 containerd[1703]: 2025-09-12 17:40:35.057 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:35.063766 containerd[1703]: time="2025-09-12T17:40:35.061122704Z" level=info msg="TearDown network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" successfully" Sep 12 17:40:35.063766 containerd[1703]: time="2025-09-12T17:40:35.062052365Z" level=info msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" returns successfully" Sep 12 17:40:35.066020 containerd[1703]: time="2025-09-12T17:40:35.065964721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67d27,Uid:21f78540-0615-49df-a575-7f7ff1c93e62,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:35.079876 containerd[1703]: time="2025-09-12T17:40:35.079816330Z" level=info msg="StartContainer for \"3e7b27962a4b61a2cfc2a703184c91c96c81a9b95d864c28ffd13b4c49096fb5\" returns successfully" Sep 12 17:40:35.098344 systemd[1]: run-netns-cni\x2d731e010f\x2d098b\x2d972e\x2d09c1\x2db2e656954f95.mount: Deactivated successfully. Sep 12 17:40:35.213611 systemd-networkd[1594]: cali839037d15d1: Link UP Sep 12 17:40:35.215689 systemd-networkd[1594]: cali839037d15d1: Gained carrier Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.139 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0 coredns-668d6bf9bc- kube-system 21f78540-0615-49df-a575-7f7ff1c93e62 980 0 2025-09-12 17:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d coredns-668d6bf9bc-67d27 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali839037d15d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.139 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.167 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" HandleID="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.167 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" HandleID="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"coredns-668d6bf9bc-67d27", "timestamp":"2025-09-12 17:40:35.167494382 +0000 UTC"}, 
Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.167 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.167 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.167 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.179 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.183 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.188 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.189 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.191 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.191 [INFO][5114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.193 [INFO][5114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.200 [INFO][5114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.207 [INFO][5114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.207 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.207 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:35.237822 containerd[1703]: 2025-09-12 17:40:35.207 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" HandleID="k8s-pod-network.f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.209 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21f78540-0615-49df-a575-7f7ff1c93e62", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"coredns-668d6bf9bc-67d27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali839037d15d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.209 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.68/32] ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.209 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali839037d15d1 ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.216 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.216 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21f78540-0615-49df-a575-7f7ff1c93e62", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f", Pod:"coredns-668d6bf9bc-67d27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali839037d15d1", MAC:"9a:52:1a:8d:5d:54", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:35.240024 containerd[1703]: 2025-09-12 17:40:35.232 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f" Namespace="kube-system" Pod="coredns-668d6bf9bc-67d27" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:35.272849 containerd[1703]: time="2025-09-12T17:40:35.272581575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:35.272849 containerd[1703]: time="2025-09-12T17:40:35.272634179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:35.272849 containerd[1703]: time="2025-09-12T17:40:35.272648480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:35.272849 containerd[1703]: time="2025-09-12T17:40:35.272717384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:35.307914 systemd[1]: Started cri-containerd-f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f.scope - libcontainer container f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f. Sep 12 17:40:35.353335 containerd[1703]: time="2025-09-12T17:40:35.353302270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67d27,Uid:21f78540-0615-49df-a575-7f7ff1c93e62,Namespace:kube-system,Attempt:1,} returns sandbox id \"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f\"" Sep 12 17:40:35.356729 containerd[1703]: time="2025-09-12T17:40:35.356431276Z" level=info msg="CreateContainer within sandbox \"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:35.390722 containerd[1703]: time="2025-09-12T17:40:35.390680722Z" level=info msg="CreateContainer within sandbox \"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4f8263ace8afbc31b88089ce447f761be9028c309e7b8776203dfd0ce36a226\"" Sep 12 17:40:35.391403 containerd[1703]: time="2025-09-12T17:40:35.391285962Z" level=info msg="StartContainer for \"b4f8263ace8afbc31b88089ce447f761be9028c309e7b8776203dfd0ce36a226\"" Sep 12 17:40:35.417912 systemd[1]: Started cri-containerd-b4f8263ace8afbc31b88089ce447f761be9028c309e7b8776203dfd0ce36a226.scope - libcontainer container b4f8263ace8afbc31b88089ce447f761be9028c309e7b8776203dfd0ce36a226. Sep 12 17:40:35.447050 containerd[1703]: time="2025-09-12T17:40:35.447006117Z" level=info msg="StartContainer for \"b4f8263ace8afbc31b88089ce447f761be9028c309e7b8776203dfd0ce36a226\" returns successfully" Sep 12 17:40:35.567884 systemd-networkd[1594]: vxlan.calico: Gained IPv6LL Sep 12 17:40:35.933370 containerd[1703]: time="2025-09-12T17:40:35.933261615Z" level=info msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.982 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.983 [INFO][5216] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" iface="eth0" netns="/var/run/netns/cni-f084a94f-ad58-2caf-428e-c429a6919b23" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.983 [INFO][5216] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" iface="eth0" netns="/var/run/netns/cni-f084a94f-ad58-2caf-428e-c429a6919b23" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.983 [INFO][5216] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" iface="eth0" netns="/var/run/netns/cni-f084a94f-ad58-2caf-428e-c429a6919b23" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.983 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:35.983 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.008 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.008 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.008 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.013 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.013 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.015 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:36.017431 containerd[1703]: 2025-09-12 17:40:36.016 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:36.018826 containerd[1703]: time="2025-09-12T17:40:36.017649451Z" level=info msg="TearDown network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" successfully" Sep 12 17:40:36.018826 containerd[1703]: time="2025-09-12T17:40:36.017683653Z" level=info msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" returns successfully" Sep 12 17:40:36.018826 containerd[1703]: time="2025-09-12T17:40:36.018644216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4nvw,Uid:094c632a-fdc9-41e5-8058-f2de8effbf33,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:36.098248 systemd[1]: run-netns-cni\x2df084a94f\x2dad58\x2d2caf\x2d428e\x2dc429a6919b23.mount: Deactivated successfully. 
Sep 12 17:40:36.185157 systemd-networkd[1594]: calic199875bcdc: Link UP Sep 12 17:40:36.189608 systemd-networkd[1594]: calic199875bcdc: Gained carrier Sep 12 17:40:36.218080 kubelet[3224]: I0912 17:40:36.217946 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-67d27" podStartSLOduration=46.217924689 podStartE2EDuration="46.217924689s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:36.21763907 +0000 UTC m=+50.399765825" watchObservedRunningTime="2025-09-12 17:40:36.217924689 +0000 UTC m=+50.400051444" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.081 [INFO][5231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0 csi-node-driver- calico-system 094c632a-fdc9-41e5-8058-f2de8effbf33 992 0 2025-09-12 17:40:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d csi-node-driver-k4nvw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic199875bcdc [] [] }} ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.081 [INFO][5231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.132 [INFO][5243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" HandleID="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.132 [INFO][5243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" HandleID="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"csi-node-driver-k4nvw", "timestamp":"2025-09-12 17:40:36.13258319 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.133 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
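The kubelet line above reports podStartSLOduration=46.217924689 for coredns-668d6bf9bc-67d27; with no image pulls recorded (firstStartedPulling is the zero time), that is essentially observedRunningTime minus podCreationTimestamp. Checking the arithmetic with the two timestamps from the log (the monotonic m=+… suffix is dropped before parsing):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.Time format

	created, _ := time.Parse(layout, "2025-09-12 17:39:50 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-12 17:40:36.21763907 +0000 UTC")

	// Matches the logged podStartSLOduration to within a millisecond; the
	// small remainder comes from when the metric was sampled vs. logged.
	fmt.Println(running.Sub(created)) // ≈46.21763907s
}
```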
Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.133 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.133 [INFO][5243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.142 [INFO][5243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.147 [INFO][5243] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.150 [INFO][5243] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.152 [INFO][5243] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.154 [INFO][5243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.154 [INFO][5243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.155 [INFO][5243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767 Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.160 [INFO][5243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.169 [INFO][5243] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.169 [INFO][5243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.169 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:36.223479 containerd[1703]: 2025-09-12 17:40:36.169 [INFO][5243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" HandleID="k8s-pod-network.1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.173 [INFO][5231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"094c632a-fdc9-41e5-8058-f2de8effbf33", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"csi-node-driver-k4nvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic199875bcdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.173 [INFO][5231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.69/32] ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.173 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic199875bcdc ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.189 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.192 [INFO][5231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"094c632a-fdc9-41e5-8058-f2de8effbf33", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767", Pod:"csi-node-driver-k4nvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic199875bcdc", MAC:"66:3b:da:7a:77:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:36.226371 containerd[1703]: 2025-09-12 17:40:36.217 [INFO][5231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767" Namespace="calico-system" Pod="csi-node-driver-k4nvw" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:36.271031 systemd-networkd[1594]: calic1427382d7f: Gained IPv6LL Sep 12 17:40:36.290454 containerd[1703]: time="2025-09-12T17:40:36.290344139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:36.290761 containerd[1703]: time="2025-09-12T17:40:36.290649759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:36.290858 containerd[1703]: time="2025-09-12T17:40:36.290726864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:36.291074 containerd[1703]: time="2025-09-12T17:40:36.291013883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:36.348234 systemd[1]: Started cri-containerd-1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767.scope - libcontainer container 1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767. 
Sep 12 17:40:36.391818 containerd[1703]: time="2025-09-12T17:40:36.391771493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k4nvw,Uid:094c632a-fdc9-41e5-8058-f2de8effbf33,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767\"" Sep 12 17:40:36.399276 systemd-networkd[1594]: cali52c8b3a3674: Gained IPv6LL Sep 12 17:40:36.933090 containerd[1703]: time="2025-09-12T17:40:36.933017298Z" level=info msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" Sep 12 17:40:36.934310 containerd[1703]: time="2025-09-12T17:40:36.933017398Z" level=info msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.009 [INFO][5323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.011 [INFO][5323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" iface="eth0" netns="/var/run/netns/cni-8a0d2851-d3c1-4aba-5360-cff3713b4aac" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.011 [INFO][5323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" iface="eth0" netns="/var/run/netns/cni-8a0d2851-d3c1-4aba-5360-cff3713b4aac" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.012 [INFO][5323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" iface="eth0" netns="/var/run/netns/cni-8a0d2851-d3c1-4aba-5360-cff3713b4aac" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.012 [INFO][5323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.014 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.049 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.049 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.050 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.057 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.057 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.059 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:37.064789 containerd[1703]: 2025-09-12 17:40:37.062 [INFO][5323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:37.070839 containerd[1703]: time="2025-09-12T17:40:37.068482084Z" level=info msg="TearDown network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" successfully" Sep 12 17:40:37.070839 containerd[1703]: time="2025-09-12T17:40:37.069102525Z" level=info msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" returns successfully" Sep 12 17:40:37.070245 systemd[1]: run-netns-cni\x2d8a0d2851\x2dd3c1\x2d4aba\x2d5360\x2dcff3713b4aac.mount: Deactivated successfully. Sep 12 17:40:37.074001 containerd[1703]: time="2025-09-12T17:40:37.073499613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-wrztx,Uid:4e55732e-c069-4255-903c-d2d265e35b14,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.013 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.014 [INFO][5322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" iface="eth0" netns="/var/run/netns/cni-219d9225-67cc-6a7e-55d4-877c5e6001d7" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.015 [INFO][5322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" iface="eth0" netns="/var/run/netns/cni-219d9225-67cc-6a7e-55d4-877c5e6001d7" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.015 [INFO][5322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" iface="eth0" netns="/var/run/netns/cni-219d9225-67cc-6a7e-55d4-877c5e6001d7" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.015 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.015 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.057 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.058 [INFO][5338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.059 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.069 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.069 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.073 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:37.076998 containerd[1703]: 2025-09-12 17:40:37.075 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:37.080048 containerd[1703]: time="2025-09-12T17:40:37.079486706Z" level=info msg="TearDown network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" successfully" Sep 12 17:40:37.080048 containerd[1703]: time="2025-09-12T17:40:37.079521408Z" level=info msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" returns successfully" Sep 12 17:40:37.082306 containerd[1703]: time="2025-09-12T17:40:37.081954168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-x9kbt,Uid:78d725d5-0368-4c83-a47c-f7b5ec0c5f89,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:37.082611 systemd[1]: run-netns-cni\x2d219d9225\x2d67cc\x2d6a7e\x2d55d4\x2d877c5e6001d7.mount: Deactivated successfully. 
Sep 12 17:40:37.103093 systemd-networkd[1594]: cali839037d15d1: Gained IPv6LL Sep 12 17:40:37.708183 systemd-networkd[1594]: calid05d69cdaa3: Link UP Sep 12 17:40:37.708477 systemd-networkd[1594]: calid05d69cdaa3: Gained carrier Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.603 [INFO][5365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0 calico-apiserver-7f8c6cc887- calico-apiserver 78d725d5-0368-4c83-a47c-f7b5ec0c5f89 1009 0 2025-09-12 17:40:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8c6cc887 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d calico-apiserver-7f8c6cc887-x9kbt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid05d69cdaa3 [] [] }} ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.604 [INFO][5365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.651 [INFO][5380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.651 [INFO][5380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"calico-apiserver-7f8c6cc887-x9kbt", "timestamp":"2025-09-12 17:40:37.651100703 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.651 [INFO][5380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.651 [INFO][5380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.651 [INFO][5380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.660 [INFO][5380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.667 [INFO][5380] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.673 [INFO][5380] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.675 [INFO][5380] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.678 [INFO][5380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.679 [INFO][5380] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.680 [INFO][5380] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648 Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.687 [INFO][5380] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.698 [INFO][5380] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.70/26] block=192.168.126.64/26 handle="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.698 [INFO][5380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.70/26] handle="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.698 [INFO][5380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:37.731028 containerd[1703]: 2025-09-12 17:40:37.698 [INFO][5380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.70/26] IPv6=[] ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.702 [INFO][5365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"78d725d5-0368-4c83-a47c-f7b5ec0c5f89", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"calico-apiserver-7f8c6cc887-x9kbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05d69cdaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.702 [INFO][5365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.70/32] ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.702 [INFO][5365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid05d69cdaa3 ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.706 [INFO][5365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.707 
[INFO][5365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"78d725d5-0368-4c83-a47c-f7b5ec0c5f89", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648", Pod:"calico-apiserver-7f8c6cc887-x9kbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05d69cdaa3", MAC:"4a:5b:72:6c:a2:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:37.732088 containerd[1703]: 2025-09-12 17:40:37.728 [INFO][5365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Namespace="calico-apiserver" Pod="calico-apiserver-7f8c6cc887-x9kbt" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:37.794706 containerd[1703]: time="2025-09-12T17:40:37.793772162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:37.794706 containerd[1703]: time="2025-09-12T17:40:37.793832966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:37.794706 containerd[1703]: time="2025-09-12T17:40:37.794578215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:37.795604 containerd[1703]: time="2025-09-12T17:40:37.795474174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:37.839476 systemd-networkd[1594]: cali4fc85137f7b: Link UP Sep 12 17:40:37.843722 systemd-networkd[1594]: cali4fc85137f7b: Gained carrier Sep 12 17:40:37.860951 systemd[1]: Started cri-containerd-f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648.scope - libcontainer container f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648. 
Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.605 [INFO][5355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0 calico-apiserver-fcb6c89c9- calico-apiserver 4e55732e-c069-4255-903c-d2d265e35b14 1008 0 2025-09-12 17:40:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fcb6c89c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d calico-apiserver-fcb6c89c9-wrztx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fc85137f7b [] [] }} ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.606 [INFO][5355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.681 [INFO][5385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" HandleID="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.681 [INFO][5385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" HandleID="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003321e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"calico-apiserver-fcb6c89c9-wrztx", "timestamp":"2025-09-12 17:40:37.681624905 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.681 [INFO][5385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.698 [INFO][5385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.699 [INFO][5385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.761 [INFO][5385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.772 [INFO][5385] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.777 [INFO][5385] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.780 [INFO][5385] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.784 [INFO][5385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.785 [INFO][5385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.790 [INFO][5385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.805 [INFO][5385] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.829 [INFO][5385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.71/26] block=192.168.126.64/26 handle="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.829 [INFO][5385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.71/26] handle="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.829 [INFO][5385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:37.873136 containerd[1703]: 2025-09-12 17:40:37.829 [INFO][5385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.71/26] IPv6=[] ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" HandleID="k8s-pod-network.a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.833 [INFO][5355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e55732e-c069-4255-903c-d2d265e35b14", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"calico-apiserver-fcb6c89c9-wrztx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fc85137f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.834 [INFO][5355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.71/32] ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.835 [INFO][5355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fc85137f7b ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.839 [INFO][5355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.839 [INFO][5355] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e55732e-c069-4255-903c-d2d265e35b14", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c", Pod:"calico-apiserver-fcb6c89c9-wrztx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fc85137f7b", MAC:"7a:b4:4d:93:cb:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:37.874592 containerd[1703]: 2025-09-12 17:40:37.865 [INFO][5355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-wrztx" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:37.938399 containerd[1703]: time="2025-09-12T17:40:37.937362081Z" level=info msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" Sep 12 17:40:37.941540 containerd[1703]: time="2025-09-12T17:40:37.938495956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:37.941768 containerd[1703]: time="2025-09-12T17:40:37.938788975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:37.941768 containerd[1703]: time="2025-09-12T17:40:37.939893347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:37.941768 containerd[1703]: time="2025-09-12T17:40:37.940009955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:37.993372 systemd[1]: Started cri-containerd-a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c.scope - libcontainer container a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c. 
Sep 12 17:40:38.060445 containerd[1703]: time="2025-09-12T17:40:38.060246643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8c6cc887-x9kbt,Uid:78d725d5-0368-4c83-a47c-f7b5ec0c5f89,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\"" Sep 12 17:40:38.063418 systemd-networkd[1594]: calic199875bcdc: Gained IPv6LL Sep 12 17:40:38.223772 containerd[1703]: time="2025-09-12T17:40:38.222655696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-wrztx,Uid:4e55732e-c069-4255-903c-d2d265e35b14,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c\"" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.097 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.098 [INFO][5476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" iface="eth0" netns="/var/run/netns/cni-12971f69-bf17-b5ab-482d-17c991eda3b8" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.101 [INFO][5476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" iface="eth0" netns="/var/run/netns/cni-12971f69-bf17-b5ab-482d-17c991eda3b8" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.105 [INFO][5476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" iface="eth0" netns="/var/run/netns/cni-12971f69-bf17-b5ab-482d-17c991eda3b8" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.105 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.105 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.227 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.232 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.232 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.252 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.252 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.255 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:38.262336 containerd[1703]: 2025-09-12 17:40:38.259 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:38.264580 containerd[1703]: time="2025-09-12T17:40:38.264535944Z" level=info msg="TearDown network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" successfully" Sep 12 17:40:38.264580 containerd[1703]: time="2025-09-12T17:40:38.264576446Z" level=info msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" returns successfully" Sep 12 17:40:38.266042 containerd[1703]: time="2025-09-12T17:40:38.266008540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njvm2,Uid:63bab1f9-13e1-4261-8028-902eba3c8e04,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:38.496030 systemd[1]: run-netns-cni\x2d12971f69\x2dbf17\x2db5ab\x2d482d\x2d17c991eda3b8.mount: Deactivated successfully. Sep 12 17:40:38.507063 systemd-networkd[1594]: calic37f8aa3407: Link UP Sep 12 17:40:38.507311 systemd-networkd[1594]: calic37f8aa3407: Gained carrier Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.400 [INFO][5517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0 coredns-668d6bf9bc- kube-system 63bab1f9-13e1-4261-8028-902eba3c8e04 1022 0 2025-09-12 17:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d coredns-668d6bf9bc-njvm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic37f8aa3407 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.401 [INFO][5517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.437 [INFO][5528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" HandleID="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" 
Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.438 [INFO][5528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" HandleID="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"coredns-668d6bf9bc-njvm2", "timestamp":"2025-09-12 17:40:38.437919417 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.438 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.438 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.438 [INFO][5528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.447 [INFO][5528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.453 [INFO][5528] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.461 [INFO][5528] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.464 [INFO][5528] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.467 [INFO][5528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.468 [INFO][5528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.469 [INFO][5528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34 Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.476 [INFO][5528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.492 [INFO][5528] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.72/26] block=192.168.126.64/26 handle="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.492 [INFO][5528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.72/26] 
handle="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.492 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:38.531291 containerd[1703]: 2025-09-12 17:40:38.492 [INFO][5528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.72/26] IPv6=[] ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" HandleID="k8s-pod-network.7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.501 [INFO][5517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63bab1f9-13e1-4261-8028-902eba3c8e04", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"coredns-668d6bf9bc-njvm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic37f8aa3407", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.501 [INFO][5517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.72/32] ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.501 [INFO][5517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic37f8aa3407 ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 
12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.505 [INFO][5517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.506 [INFO][5517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63bab1f9-13e1-4261-8028-902eba3c8e04", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34", Pod:"coredns-668d6bf9bc-njvm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic37f8aa3407", MAC:"ea:e1:bf:1c:45:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:38.532417 containerd[1703]: 2025-09-12 17:40:38.524 [INFO][5517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-njvm2" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:38.577784 containerd[1703]: time="2025-09-12T17:40:38.575356933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:38.577784 containerd[1703]: time="2025-09-12T17:40:38.575413937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:38.577784 containerd[1703]: time="2025-09-12T17:40:38.575456140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:38.583718 containerd[1703]: time="2025-09-12T17:40:38.579786824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:38.628932 systemd[1]: Started cri-containerd-7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34.scope - libcontainer container 7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34. Sep 12 17:40:38.695992 containerd[1703]: time="2025-09-12T17:40:38.695935343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njvm2,Uid:63bab1f9-13e1-4261-8028-902eba3c8e04,Namespace:kube-system,Attempt:1,} returns sandbox id \"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34\"" Sep 12 17:40:38.702113 containerd[1703]: time="2025-09-12T17:40:38.701959538Z" level=info msg="CreateContainer within sandbox \"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:38.739006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445311835.mount: Deactivated successfully. Sep 12 17:40:38.751626 containerd[1703]: time="2025-09-12T17:40:38.751581193Z" level=info msg="CreateContainer within sandbox \"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f1f907212a1bab28e354dedea12cc97eb0a6f4f2704f72a5944f7b7c4043d75\"" Sep 12 17:40:38.753882 containerd[1703]: time="2025-09-12T17:40:38.753822540Z" level=info msg="StartContainer for \"2f1f907212a1bab28e354dedea12cc97eb0a6f4f2704f72a5944f7b7c4043d75\"" Sep 12 17:40:38.827944 systemd[1]: Started cri-containerd-2f1f907212a1bab28e354dedea12cc97eb0a6f4f2704f72a5944f7b7c4043d75.scope - libcontainer container 2f1f907212a1bab28e354dedea12cc97eb0a6f4f2704f72a5944f7b7c4043d75. Sep 12 17:40:38.894998 systemd-networkd[1594]: calid05d69cdaa3: Gained IPv6LL Sep 12 17:40:38.929516 containerd[1703]: time="2025-09-12T17:40:38.929470663Z" level=info msg="StartContainer for \"2f1f907212a1bab28e354dedea12cc97eb0a6f4f2704f72a5944f7b7c4043d75\" returns successfully" Sep 12 17:40:38.932715 containerd[1703]: time="2025-09-12T17:40:38.932679873Z" level=info msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.019 [INFO][5637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.020 [INFO][5637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" iface="eth0" netns="/var/run/netns/cni-aa6dc371-6516-6398-c5b1-65b507f9a360" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.021 [INFO][5637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" iface="eth0" netns="/var/run/netns/cni-aa6dc371-6516-6398-c5b1-65b507f9a360" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.022 [INFO][5637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" iface="eth0" netns="/var/run/netns/cni-aa6dc371-6516-6398-c5b1-65b507f9a360" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.022 [INFO][5637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.022 [INFO][5637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.074 [INFO][5647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.074 [INFO][5647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.074 [INFO][5647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.086 [WARNING][5647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.086 [INFO][5647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.090 [INFO][5647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:39.100901 containerd[1703]: 2025-09-12 17:40:39.095 [INFO][5637] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:39.101483 containerd[1703]: time="2025-09-12T17:40:39.101047518Z" level=info msg="TearDown network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" successfully" Sep 12 17:40:39.101483 containerd[1703]: time="2025-09-12T17:40:39.101083720Z" level=info msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" returns successfully" Sep 12 17:40:39.102936 containerd[1703]: time="2025-09-12T17:40:39.102904240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hhnb5,Uid:46635db1-496e-4289-8758-6f4e732d4253,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:39.232117 kubelet[3224]: I0912 17:40:39.231122 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-njvm2" podStartSLOduration=49.231098449 podStartE2EDuration="49.231098449s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:39.229150221 +0000 UTC m=+53.411276976" watchObservedRunningTime="2025-09-12 17:40:39.231098449 +0000 UTC m=+53.413225204" Sep 12 17:40:39.358504 systemd-networkd[1594]: cali52299064f0a: Link UP Sep 12 17:40:39.359377 systemd-networkd[1594]: cali52299064f0a: Gained carrier Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.184 [INFO][5654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0 goldmane-54d579b49d- calico-system 46635db1-496e-4289-8758-6f4e732d4253 1031 0 2025-09-12 17:40:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d goldmane-54d579b49d-hhnb5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali52299064f0a [] [] }} ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.184 [INFO][5654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.248 [INFO][5665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" HandleID="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.248 [INFO][5665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" HandleID="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d50a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"goldmane-54d579b49d-hhnb5", "timestamp":"2025-09-12 17:40:39.248421085 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.249 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.249 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.249 [INFO][5665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.270 [INFO][5665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.304 [INFO][5665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.313 [INFO][5665] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.316 [INFO][5665] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.318 [INFO][5665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.319 [INFO][5665] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.320 [INFO][5665] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5 Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.328 [INFO][5665] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.348 [INFO][5665] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.73/26] block=192.168.126.64/26 handle="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.348 [INFO][5665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.73/26] handle="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.349 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:39.392471 containerd[1703]: 2025-09-12 17:40:39.349 [INFO][5665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.73/26] IPv6=[] ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" HandleID="k8s-pod-network.0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.352 [INFO][5654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46635db1-496e-4289-8758-6f4e732d4253", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"goldmane-54d579b49d-hhnb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali52299064f0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.352 [INFO][5654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.73/32] ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.352 [INFO][5654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52299064f0a ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.360 [INFO][5654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.364 [INFO][5654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" 
Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46635db1-496e-4289-8758-6f4e732d4253", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5", Pod:"goldmane-54d579b49d-hhnb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali52299064f0a", MAC:"12:3d:ce:77:a0:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:39.394562 containerd[1703]: 2025-09-12 17:40:39.386 [INFO][5654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5" Namespace="calico-system" Pod="goldmane-54d579b49d-hhnb5" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:39.410040 systemd-networkd[1594]: cali4fc85137f7b: Gained IPv6LL Sep 12 17:40:39.450209 containerd[1703]: time="2025-09-12T17:40:39.448388803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:39.450209 containerd[1703]: time="2025-09-12T17:40:39.448452807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:39.450209 containerd[1703]: time="2025-09-12T17:40:39.448483909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:39.450209 containerd[1703]: time="2025-09-12T17:40:39.448614718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:39.482962 systemd[1]: Started cri-containerd-0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5.scope - libcontainer container 0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5. Sep 12 17:40:39.498907 systemd[1]: run-netns-cni\x2daa6dc371\x2d6516\x2d6398\x2dc5b1\x2d65b507f9a360.mount: Deactivated successfully. 
Sep 12 17:40:39.612108 containerd[1703]: time="2025-09-12T17:40:39.611979134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hhnb5,Uid:46635db1-496e-4289-8758-6f4e732d4253,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5\"" Sep 12 17:40:39.718336 containerd[1703]: time="2025-09-12T17:40:39.718285108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:39.720629 containerd[1703]: time="2025-09-12T17:40:39.720484252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:40:39.722967 containerd[1703]: time="2025-09-12T17:40:39.722911411Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:39.726400 containerd[1703]: time="2025-09-12T17:40:39.726348837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:39.727596 containerd[1703]: time="2025-09-12T17:40:39.727111387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.820031588s" Sep 12 17:40:39.727596 containerd[1703]: time="2025-09-12T17:40:39.727147989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:40:39.729704 containerd[1703]: time="2025-09-12T17:40:39.729675155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:39.749202 containerd[1703]: time="2025-09-12T17:40:39.749165933Z" level=info msg="CreateContainer within sandbox \"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:40:39.777956 containerd[1703]: time="2025-09-12T17:40:39.777915519Z" level=info msg="CreateContainer within sandbox \"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717\"" Sep 12 17:40:39.779769 containerd[1703]: time="2025-09-12T17:40:39.778809778Z" level=info msg="StartContainer for \"ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717\"" Sep 12 17:40:39.807929 systemd[1]: Started cri-containerd-ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717.scope - libcontainer container ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717. 
Sep 12 17:40:39.857038 containerd[1703]: time="2025-09-12T17:40:39.856992207Z" level=info msg="StartContainer for \"ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717\" returns successfully" Sep 12 17:40:40.240082 systemd-networkd[1594]: calic37f8aa3407: Gained IPv6LL Sep 12 17:40:40.244007 kubelet[3224]: I0912 17:40:40.242655 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bfb88c844-q2vh8" podStartSLOduration=29.350370378 podStartE2EDuration="34.242628404s" podCreationTimestamp="2025-09-12 17:40:06 +0000 UTC" firstStartedPulling="2025-09-12 17:40:34.835917431 +0000 UTC m=+49.018044186" lastFinishedPulling="2025-09-12 17:40:39.728175457 +0000 UTC m=+53.910302212" observedRunningTime="2025-09-12 17:40:40.242007363 +0000 UTC m=+54.424134118" watchObservedRunningTime="2025-09-12 17:40:40.242628404 +0000 UTC m=+54.424755159" Sep 12 17:40:40.879896 systemd-networkd[1594]: cali52299064f0a: Gained IPv6LL Sep 12 17:40:43.255097 containerd[1703]: time="2025-09-12T17:40:43.255042895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.257224 containerd[1703]: time="2025-09-12T17:40:43.257159124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:40:43.259791 containerd[1703]: time="2025-09-12T17:40:43.259720579Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.263883 containerd[1703]: time="2025-09-12T17:40:43.263656719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:43.264399 containerd[1703]: time="2025-09-12T17:40:43.264364762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.534502895s" Sep 12 17:40:43.264479 containerd[1703]: time="2025-09-12T17:40:43.264406564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:43.266331 containerd[1703]: time="2025-09-12T17:40:43.266289179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:40:43.268362 containerd[1703]: time="2025-09-12T17:40:43.268329203Z" level=info msg="CreateContainer within sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:43.294491 containerd[1703]: time="2025-09-12T17:40:43.294440290Z" level=info msg="CreateContainer within sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\"" Sep 12 17:40:43.295291 containerd[1703]: time="2025-09-12T17:40:43.295248540Z" level=info msg="StartContainer for 
\"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\"" Sep 12 17:40:43.333882 systemd[1]: run-containerd-runc-k8s.io-bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106-runc.jdW8NN.mount: Deactivated successfully. Sep 12 17:40:43.340921 systemd[1]: Started cri-containerd-bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106.scope - libcontainer container bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106. Sep 12 17:40:43.387942 containerd[1703]: time="2025-09-12T17:40:43.387885472Z" level=info msg="StartContainer for \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" returns successfully" Sep 12 17:40:45.249899 kubelet[3224]: I0912 17:40:45.249857 3224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:45.927383 containerd[1703]: time="2025-09-12T17:40:45.927339282Z" level=info msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:45.981 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46635db1-496e-4289-8758-6f4e732d4253", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5", Pod:"goldmane-54d579b49d-hhnb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali52299064f0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:45.982 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:45.982 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" iface="eth0" netns="" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:45.982 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:45.982 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.026 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.026 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.026 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.035 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.035 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.036 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.040463 containerd[1703]: 2025-09-12 17:40:46.038 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.041281 containerd[1703]: time="2025-09-12T17:40:46.040501763Z" level=info msg="TearDown network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" successfully" Sep 12 17:40:46.041281 containerd[1703]: time="2025-09-12T17:40:46.040531365Z" level=info msg="StopPodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" returns successfully" Sep 12 17:40:46.042330 containerd[1703]: time="2025-09-12T17:40:46.041858146Z" level=info msg="RemovePodSandbox for \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" Sep 12 17:40:46.042330 containerd[1703]: time="2025-09-12T17:40:46.041895148Z" level=info msg="Forcibly stopping sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\"" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.094 [WARNING][5897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"46635db1-496e-4289-8758-6f4e732d4253", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5", Pod:"goldmane-54d579b49d-hhnb5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali52299064f0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.094 [INFO][5897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.094 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" iface="eth0" netns="" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.094 [INFO][5897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.094 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.138 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.139 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.139 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.147 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.147 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" HandleID="k8s-pod-network.d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-goldmane--54d579b49d--hhnb5-eth0" Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.150 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.160770 containerd[1703]: 2025-09-12 17:40:46.154 [INFO][5897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc" Sep 12 17:40:46.160770 containerd[1703]: time="2025-09-12T17:40:46.160708372Z" level=info msg="TearDown network for sandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" successfully" Sep 12 17:40:46.183032 containerd[1703]: time="2025-09-12T17:40:46.182860819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:46.183032 containerd[1703]: time="2025-09-12T17:40:46.182953925Z" level=info msg="RemovePodSandbox \"d0381a71da3645a4ec73627bb3211a3a22f2a1e76709d62be77047c444ec83dc\" returns successfully" Sep 12 17:40:46.186671 containerd[1703]: time="2025-09-12T17:40:46.186357532Z" level=info msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.256 [WARNING][5920] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0", GenerateName:"calico-kube-controllers-5bfb88c844-", Namespace:"calico-system", SelfLink:"", UID:"66fb902f-f57e-4463-81fb-6c863e23cadb", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfb88c844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd", Pod:"calico-kube-controllers-5bfb88c844-q2vh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52c8b3a3674", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.256 [INFO][5920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.256 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" iface="eth0" netns="" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.256 [INFO][5920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.256 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.295 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.295 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.295 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.304 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.304 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.306 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.310279 containerd[1703]: 2025-09-12 17:40:46.307 [INFO][5920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.311479 containerd[1703]: time="2025-09-12T17:40:46.310937907Z" level=info msg="TearDown network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" successfully" Sep 12 17:40:46.311479 containerd[1703]: time="2025-09-12T17:40:46.310979109Z" level=info msg="StopPodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" returns successfully" Sep 12 17:40:46.312118 containerd[1703]: time="2025-09-12T17:40:46.311913766Z" level=info msg="RemovePodSandbox for \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" Sep 12 17:40:46.312118 containerd[1703]: time="2025-09-12T17:40:46.311949068Z" level=info msg="Forcibly stopping sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\"" Sep 12 17:40:46.365107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277120736.mount: Deactivated successfully. Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.374 [WARNING][5942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0", GenerateName:"calico-kube-controllers-5bfb88c844-", Namespace:"calico-system", SelfLink:"", UID:"66fb902f-f57e-4463-81fb-6c863e23cadb", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfb88c844", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"dd0f52c4b956b2fc9153674fc927bdf99b196cf26fa96b723b2ba62a262312bd", Pod:"calico-kube-controllers-5bfb88c844-q2vh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52c8b3a3674", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.374 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.374 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" iface="eth0" netns="" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.374 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.374 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.399 [INFO][5949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.399 [INFO][5949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.399 [INFO][5949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.405 [WARNING][5949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.405 [INFO][5949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" HandleID="k8s-pod-network.1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--kube--controllers--5bfb88c844--q2vh8-eth0" Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.406 [INFO][5949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.409103 containerd[1703]: 2025-09-12 17:40:46.407 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc" Sep 12 17:40:46.409800 containerd[1703]: time="2025-09-12T17:40:46.409139378Z" level=info msg="TearDown network for sandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" successfully" Sep 12 17:40:46.417179 containerd[1703]: time="2025-09-12T17:40:46.417127264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.420004 containerd[1703]: time="2025-09-12T17:40:46.419960736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:40:46.420130 containerd[1703]: time="2025-09-12T17:40:46.420038541Z" level=info msg="RemovePodSandbox \"1dd787c5ce00e86dffe7d7676bb68b5d3c08e7a46687293b883c4694209191dc\" returns successfully" Sep 12 17:40:46.420774 containerd[1703]: time="2025-09-12T17:40:46.420505669Z" level=info msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" Sep 12 17:40:46.425428 containerd[1703]: time="2025-09-12T17:40:46.425393066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:40:46.425595 containerd[1703]: time="2025-09-12T17:40:46.425578178Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.430958 containerd[1703]: time="2025-09-12T17:40:46.430931003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.431383 containerd[1703]: time="2025-09-12T17:40:46.431362729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.164912141s" Sep 12 17:40:46.431468 containerd[1703]: time="2025-09-12T17:40:46.431455035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:40:46.434389 containerd[1703]: time="2025-09-12T17:40:46.433602065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:40:46.436888 containerd[1703]: time="2025-09-12T17:40:46.436860264Z" level=info msg="CreateContainer within sandbox \"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:40:46.478065 containerd[1703]: time="2025-09-12T17:40:46.478016366Z" level=info msg="CreateContainer within sandbox \"1a09529ed9628a917a0453ad8e5a120cded798686445f9f308a4f10133a418fd\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1f547d6dde65dd6bf8147c33bf280b17efa11b3ab5be32e4d0ff775bebf46cdd\"" Sep 12 17:40:46.479242 containerd[1703]: time="2025-09-12T17:40:46.479203638Z" level=info msg="StartContainer for \"1f547d6dde65dd6bf8147c33bf280b17efa11b3ab5be32e4d0ff775bebf46cdd\"" Sep 12 17:40:46.532966 systemd[1]: Started cri-containerd-1f547d6dde65dd6bf8147c33bf280b17efa11b3ab5be32e4d0ff775bebf46cdd.scope - libcontainer container 1f547d6dde65dd6bf8147c33bf280b17efa11b3ab5be32e4d0ff775bebf46cdd. Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.507 [WARNING][5967] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21f78540-0615-49df-a575-7f7ff1c93e62", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f", Pod:"coredns-668d6bf9bc-67d27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali839037d15d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.508 [INFO][5967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.508 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" iface="eth0" netns="" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.508 [INFO][5967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.508 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.548 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.549 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.549 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.557 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.557 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.559 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.562958 containerd[1703]: 2025-09-12 17:40:46.561 [INFO][5967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.564987 containerd[1703]: time="2025-09-12T17:40:46.563021835Z" level=info msg="TearDown network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" successfully" Sep 12 17:40:46.564987 containerd[1703]: time="2025-09-12T17:40:46.563052137Z" level=info msg="StopPodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" returns successfully" Sep 12 17:40:46.564987 containerd[1703]: time="2025-09-12T17:40:46.564060298Z" level=info msg="RemovePodSandbox for \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" Sep 12 17:40:46.564987 containerd[1703]: time="2025-09-12T17:40:46.564094300Z" level=info msg="Forcibly stopping sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\"" Sep 12 17:40:46.623940 containerd[1703]: time="2025-09-12T17:40:46.623708925Z" level=info msg="StartContainer for \"1f547d6dde65dd6bf8147c33bf280b17efa11b3ab5be32e4d0ff775bebf46cdd\" returns successfully" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.617 [WARNING][6017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21f78540-0615-49df-a575-7f7ff1c93e62", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f3a5130662559df38dca327e56ae7dd86cfa141bccbfbfb07e42bc056694136f", Pod:"coredns-668d6bf9bc-67d27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali839037d15d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.617 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.617 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" iface="eth0" netns="" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.617 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.617 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.654 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.655 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.655 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.663 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.663 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" HandleID="k8s-pod-network.0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--67d27-eth0" Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.665 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.668652 containerd[1703]: 2025-09-12 17:40:46.666 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d" Sep 12 17:40:46.669402 containerd[1703]: time="2025-09-12T17:40:46.668898572Z" level=info msg="TearDown network for sandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" successfully" Sep 12 17:40:46.682357 containerd[1703]: time="2025-09-12T17:40:46.682152178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:46.682357 containerd[1703]: time="2025-09-12T17:40:46.682230583Z" level=info msg="RemovePodSandbox \"0d949d9e6b23de0dce47d1056e738df6e8a772127295f880dc7ead2af6a17c6d\" returns successfully" Sep 12 17:40:46.682803 containerd[1703]: time="2025-09-12T17:40:46.682772816Z" level=info msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.714 [WARNING][6051] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"094c632a-fdc9-41e5-8058-f2de8effbf33", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767", Pod:"csi-node-driver-k4nvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic199875bcdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.714 [INFO][6051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.714 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" iface="eth0" netns="" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.714 [INFO][6051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.714 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.734 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.734 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.734 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.739 [WARNING][6059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.739 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.740 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.743115 containerd[1703]: 2025-09-12 17:40:46.741 [INFO][6051] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.743115 containerd[1703]: time="2025-09-12T17:40:46.743011779Z" level=info msg="TearDown network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" successfully" Sep 12 17:40:46.743115 containerd[1703]: time="2025-09-12T17:40:46.743042381Z" level=info msg="StopPodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" returns successfully" Sep 12 17:40:46.745081 containerd[1703]: time="2025-09-12T17:40:46.745046403Z" level=info msg="RemovePodSandbox for \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" Sep 12 17:40:46.745183 containerd[1703]: time="2025-09-12T17:40:46.745085005Z" level=info msg="Forcibly stopping sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\"" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.781 [WARNING][6073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"094c632a-fdc9-41e5-8058-f2de8effbf33", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767", Pod:"csi-node-driver-k4nvw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic199875bcdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.782 [INFO][6073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.782 [INFO][6073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" iface="eth0" netns="" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.782 [INFO][6073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.782 [INFO][6073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.801 [INFO][6081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.802 [INFO][6081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.802 [INFO][6081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.807 [WARNING][6081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.807 [INFO][6081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" HandleID="k8s-pod-network.220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Workload="ci--4081.3.6--a--da806c5a3d-k8s-csi--node--driver--k4nvw-eth0" Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.808 [INFO][6081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.810551 containerd[1703]: 2025-09-12 17:40:46.809 [INFO][6073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c" Sep 12 17:40:46.811230 containerd[1703]: time="2025-09-12T17:40:46.810644791Z" level=info msg="TearDown network for sandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" successfully" Sep 12 17:40:46.818119 containerd[1703]: time="2025-09-12T17:40:46.818070543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:46.818410 containerd[1703]: time="2025-09-12T17:40:46.818142647Z" level=info msg="RemovePodSandbox \"220602f8aeb580c9dc522f34b3433768bb0dd0d197799dcdbf4b72967f0f633c\" returns successfully" Sep 12 17:40:46.818682 containerd[1703]: time="2025-09-12T17:40:46.818640477Z" level=info msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.851 [WARNING][6095] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"78d725d5-0368-4c83-a47c-f7b5ec0c5f89", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648", Pod:"calico-apiserver-7f8c6cc887-x9kbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05d69cdaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.851 [INFO][6095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.851 [INFO][6095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" iface="eth0" netns="" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.851 [INFO][6095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.851 [INFO][6095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.873 [INFO][6102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.873 [INFO][6102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.873 [INFO][6102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.880 [WARNING][6102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.881 [INFO][6102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.882 [INFO][6102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.885400 containerd[1703]: 2025-09-12 17:40:46.883 [INFO][6095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.885400 containerd[1703]: time="2025-09-12T17:40:46.885290730Z" level=info msg="TearDown network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" successfully" Sep 12 17:40:46.885400 containerd[1703]: time="2025-09-12T17:40:46.885310631Z" level=info msg="StopPodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" returns successfully" Sep 12 17:40:46.886457 containerd[1703]: time="2025-09-12T17:40:46.886142682Z" level=info msg="RemovePodSandbox for \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" Sep 12 17:40:46.886457 containerd[1703]: time="2025-09-12T17:40:46.886183084Z" level=info msg="Forcibly stopping sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\"" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.917 [WARNING][6117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"78d725d5-0368-4c83-a47c-f7b5ec0c5f89", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648", Pod:"calico-apiserver-7f8c6cc887-x9kbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid05d69cdaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.917 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.918 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" iface="eth0" netns="" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.918 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.918 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.939 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.939 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.939 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.945 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.945 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" HandleID="k8s-pod-network.3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.947 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:46.951215 containerd[1703]: 2025-09-12 17:40:46.948 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2" Sep 12 17:40:46.951215 containerd[1703]: time="2025-09-12T17:40:46.949754150Z" level=info msg="TearDown network for sandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" successfully" Sep 12 17:40:46.956151 containerd[1703]: time="2025-09-12T17:40:46.956110636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:46.956258 containerd[1703]: time="2025-09-12T17:40:46.956184841Z" level=info msg="RemovePodSandbox \"3ddcdca3e998d26800b29e1e12f6aa4af3c0b650335874d1190ce696bf5368f2\" returns successfully" Sep 12 17:40:46.956667 containerd[1703]: time="2025-09-12T17:40:46.956640468Z" level=info msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:46.987 [WARNING][6138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e55732e-c069-4255-903c-d2d265e35b14", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c", Pod:"calico-apiserver-fcb6c89c9-wrztx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fc85137f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:46.987 [INFO][6138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:46.987 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" iface="eth0" netns="" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:46.987 [INFO][6138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:46.987 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.008 [INFO][6145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.008 [INFO][6145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.008 [INFO][6145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.014 [WARNING][6145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.014 [INFO][6145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.015 [INFO][6145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.018406 containerd[1703]: 2025-09-12 17:40:47.016 [INFO][6138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.020287 containerd[1703]: time="2025-09-12T17:40:47.018386523Z" level=info msg="TearDown network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" successfully" Sep 12 17:40:47.020287 containerd[1703]: time="2025-09-12T17:40:47.020165731Z" level=info msg="StopPodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" returns successfully" Sep 12 17:40:47.020681 containerd[1703]: time="2025-09-12T17:40:47.020652461Z" level=info msg="RemovePodSandbox for \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" Sep 12 17:40:47.020844 containerd[1703]: time="2025-09-12T17:40:47.020686863Z" level=info msg="Forcibly stopping sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\"" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.056 [WARNING][6160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e55732e-c069-4255-903c-d2d265e35b14", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c", Pod:"calico-apiserver-fcb6c89c9-wrztx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fc85137f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.057 [INFO][6160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.057 [INFO][6160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" iface="eth0" netns="" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.057 [INFO][6160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.057 [INFO][6160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.081 [INFO][6168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.081 [INFO][6168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.081 [INFO][6168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.087 [WARNING][6168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.087 [INFO][6168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" HandleID="k8s-pod-network.df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--wrztx-eth0" Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.089 [INFO][6168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.091596 containerd[1703]: 2025-09-12 17:40:47.090 [INFO][6160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27" Sep 12 17:40:47.092462 containerd[1703]: time="2025-09-12T17:40:47.091640377Z" level=info msg="TearDown network for sandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" successfully" Sep 12 17:40:47.104704 containerd[1703]: time="2025-09-12T17:40:47.104661769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:47.104871 containerd[1703]: time="2025-09-12T17:40:47.104734773Z" level=info msg="RemovePodSandbox \"df650ba6f1391d60df12664d70beab8805ab5887596ed4f873b1c6afdde36d27\" returns successfully" Sep 12 17:40:47.105616 containerd[1703]: time="2025-09-12T17:40:47.105300808Z" level=info msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.138 [WARNING][6182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63bab1f9-13e1-4261-8028-902eba3c8e04", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34", Pod:"coredns-668d6bf9bc-njvm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic37f8aa3407", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.139 [INFO][6182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.139 [INFO][6182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" iface="eth0" netns="" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.139 [INFO][6182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.139 [INFO][6182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.160 [INFO][6189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.161 [INFO][6189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.161 [INFO][6189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.167 [WARNING][6189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.167 [INFO][6189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.168 [INFO][6189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.170974 containerd[1703]: 2025-09-12 17:40:47.169 [INFO][6182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.171732 containerd[1703]: time="2025-09-12T17:40:47.171511934Z" level=info msg="TearDown network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" successfully" Sep 12 17:40:47.171732 containerd[1703]: time="2025-09-12T17:40:47.171553136Z" level=info msg="StopPodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" returns successfully" Sep 12 17:40:47.172450 containerd[1703]: time="2025-09-12T17:40:47.172321583Z" level=info msg="RemovePodSandbox for \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" Sep 12 17:40:47.172450 containerd[1703]: time="2025-09-12T17:40:47.172349285Z" level=info msg="Forcibly stopping sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\"" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.201 [WARNING][6203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63bab1f9-13e1-4261-8028-902eba3c8e04", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"7c0f7476ed2c752aaf724334551cbc40e70cdffbb082a07a64e05f828d598e34", Pod:"coredns-668d6bf9bc-njvm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic37f8aa3407", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.202 [INFO][6203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.202 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" iface="eth0" netns="" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.202 [INFO][6203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.202 [INFO][6203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.223 [INFO][6210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.223 [INFO][6210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.223 [INFO][6210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.229 [WARNING][6210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.229 [INFO][6210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" HandleID="k8s-pod-network.e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Workload="ci--4081.3.6--a--da806c5a3d-k8s-coredns--668d6bf9bc--njvm2-eth0" Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.230 [INFO][6210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.232762 containerd[1703]: 2025-09-12 17:40:47.231 [INFO][6203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5" Sep 12 17:40:47.232762 containerd[1703]: time="2025-09-12T17:40:47.232713655Z" level=info msg="TearDown network for sandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" successfully" Sep 12 17:40:47.243037 containerd[1703]: time="2025-09-12T17:40:47.242991580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:47.243170 containerd[1703]: time="2025-09-12T17:40:47.243062884Z" level=info msg="RemovePodSandbox \"e4d974a71b4d43ab5b17cbf75b81c15566ca514ece43a2a33b6d5527d1231ae5\" returns successfully" Sep 12 17:40:47.243677 containerd[1703]: time="2025-09-12T17:40:47.243644820Z" level=info msg="StopPodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" Sep 12 17:40:47.291403 kubelet[3224]: I0912 17:40:47.291249 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f8c6cc887-h9b8d" podStartSLOduration=36.908141864 podStartE2EDuration="45.291226913s" podCreationTimestamp="2025-09-12 17:40:02 +0000 UTC" firstStartedPulling="2025-09-12 17:40:34.882699099 +0000 UTC m=+49.064825854" lastFinishedPulling="2025-09-12 17:40:43.265784048 +0000 UTC m=+57.447910903" observedRunningTime="2025-09-12 17:40:44.265037107 +0000 UTC m=+58.447163862" watchObservedRunningTime="2025-09-12 17:40:47.291226913 +0000 UTC m=+61.473353668" Sep 12 17:40:47.291960 kubelet[3224]: I0912 17:40:47.291574 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77b4c58d6c-r968s" podStartSLOduration=1.71234818 podStartE2EDuration="15.291562133s" podCreationTimestamp="2025-09-12 17:40:32 +0000 UTC" firstStartedPulling="2025-09-12 17:40:32.853369951 +0000 UTC m=+47.035496706" lastFinishedPulling="2025-09-12 17:40:46.432583804 +0000 UTC m=+60.614710659" observedRunningTime="2025-09-12 17:40:47.290799087 +0000 UTC m=+61.472925942" watchObservedRunningTime="2025-09-12 17:40:47.291562133 +0000 UTC m=+61.473688888" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.293 [WARNING][6224] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.294 [INFO][6224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.294 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" iface="eth0" netns="" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.294 [INFO][6224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.294 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.335 [INFO][6231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.335 [INFO][6231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.336 [INFO][6231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.351 [WARNING][6231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.351 [INFO][6231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.353 [INFO][6231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.357571 containerd[1703]: 2025-09-12 17:40:47.355 [INFO][6224] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.359032 containerd[1703]: time="2025-09-12T17:40:47.357606449Z" level=info msg="TearDown network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" successfully" Sep 12 17:40:47.359032 containerd[1703]: time="2025-09-12T17:40:47.357643451Z" level=info msg="StopPodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" returns successfully" Sep 12 17:40:47.359032 containerd[1703]: time="2025-09-12T17:40:47.358154682Z" level=info msg="RemovePodSandbox for \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" Sep 12 17:40:47.359032 containerd[1703]: time="2025-09-12T17:40:47.358187884Z" level=info msg="Forcibly stopping sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\"" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.395 [WARNING][6248] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.395 [INFO][6248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.395 [INFO][6248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" iface="eth0" netns="" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.395 [INFO][6248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.395 [INFO][6248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.428 [INFO][6256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.430 [INFO][6256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.430 [INFO][6256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.437 [WARNING][6256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.437 [INFO][6256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" HandleID="k8s-pod-network.4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Workload="ci--4081.3.6--a--da806c5a3d-k8s-whisker--6dd6b9ffc6--zl92p-eth0" Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.438 [INFO][6256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.441974 containerd[1703]: 2025-09-12 17:40:47.440 [INFO][6248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03" Sep 12 17:40:47.441974 containerd[1703]: time="2025-09-12T17:40:47.441962478Z" level=info msg="TearDown network for sandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" successfully" Sep 12 17:40:47.452769 containerd[1703]: time="2025-09-12T17:40:47.452550422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:47.452769 containerd[1703]: time="2025-09-12T17:40:47.452627727Z" level=info msg="RemovePodSandbox \"4be43cdb4979616632da727694fbf90a68d2ccfcdaa175c3033165c5d74c7b03\" returns successfully" Sep 12 17:40:47.453625 containerd[1703]: time="2025-09-12T17:40:47.453189961Z" level=info msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.495 [WARNING][6270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75fb64e-a5f0-43ca-beae-d16320e589c8", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79", Pod:"calico-apiserver-7f8c6cc887-h9b8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1427382d7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.496 [INFO][6270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.496 [INFO][6270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" iface="eth0" netns="" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.496 [INFO][6270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.496 [INFO][6270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.516 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.516 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.516 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.524 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.524 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.525 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.528345 containerd[1703]: 2025-09-12 17:40:47.527 [INFO][6270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.528940 containerd[1703]: time="2025-09-12T17:40:47.528410335Z" level=info msg="TearDown network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" successfully" Sep 12 17:40:47.528940 containerd[1703]: time="2025-09-12T17:40:47.528462138Z" level=info msg="StopPodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" returns successfully" Sep 12 17:40:47.529634 containerd[1703]: time="2025-09-12T17:40:47.529599207Z" level=info msg="RemovePodSandbox for \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" Sep 12 17:40:47.529634 containerd[1703]: time="2025-09-12T17:40:47.529632209Z" level=info msg="Forcibly stopping sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\"" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.563 [WARNING][6291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0", GenerateName:"calico-apiserver-7f8c6cc887-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75fb64e-a5f0-43ca-beae-d16320e589c8", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8c6cc887", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79", Pod:"calico-apiserver-7f8c6cc887-h9b8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1427382d7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.563 [INFO][6291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.563 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" iface="eth0" netns="" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.563 [INFO][6291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.563 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.588 [INFO][6298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.588 [INFO][6298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.588 [INFO][6298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.594 [WARNING][6298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.594 [INFO][6298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" HandleID="k8s-pod-network.4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.595 [INFO][6298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:47.598287 containerd[1703]: 2025-09-12 17:40:47.597 [INFO][6291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b" Sep 12 17:40:47.598287 containerd[1703]: time="2025-09-12T17:40:47.598222079Z" level=info msg="TearDown network for sandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" successfully" Sep 12 17:40:47.606960 containerd[1703]: time="2025-09-12T17:40:47.606916508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:47.607093 containerd[1703]: time="2025-09-12T17:40:47.606988713Z" level=info msg="RemovePodSandbox \"4aba4e6c3bcf3bd349b85a06ba0a99ac8a3ce11d6ecc56b693354299b7de3c1b\" returns successfully" Sep 12 17:40:47.820915 kubelet[3224]: I0912 17:40:47.820293 3224 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:54.301984 containerd[1703]: time="2025-09-12T17:40:54.301924300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:54.304452 containerd[1703]: time="2025-09-12T17:40:54.304250628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:40:54.309768 containerd[1703]: time="2025-09-12T17:40:54.309521419Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:54.313256 containerd[1703]: time="2025-09-12T17:40:54.313223823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:54.314547 containerd[1703]: time="2025-09-12T17:40:54.313850557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 7.88021129s" Sep 12 17:40:54.314547 containerd[1703]: time="2025-09-12T17:40:54.313886759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:40:54.315471 containerd[1703]: 
time="2025-09-12T17:40:54.315430644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:54.317208 containerd[1703]: time="2025-09-12T17:40:54.317177441Z" level=info msg="CreateContainer within sandbox \"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:40:54.346356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503113781.mount: Deactivated successfully. Sep 12 17:40:54.349371 containerd[1703]: time="2025-09-12T17:40:54.349327812Z" level=info msg="CreateContainer within sandbox \"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"88fbef588aa4da906b537ac8dade4a236b83a6aaff3c11d5786b6d093cd0ef1c\"" Sep 12 17:40:54.350410 containerd[1703]: time="2025-09-12T17:40:54.350371369Z" level=info msg="StartContainer for \"88fbef588aa4da906b537ac8dade4a236b83a6aaff3c11d5786b6d093cd0ef1c\"" Sep 12 17:40:54.404930 systemd[1]: Started cri-containerd-88fbef588aa4da906b537ac8dade4a236b83a6aaff3c11d5786b6d093cd0ef1c.scope - libcontainer container 88fbef588aa4da906b537ac8dade4a236b83a6aaff3c11d5786b6d093cd0ef1c. Sep 12 17:40:54.444070 containerd[1703]: time="2025-09-12T17:40:54.444030129Z" level=info msg="StartContainer for \"88fbef588aa4da906b537ac8dade4a236b83a6aaff3c11d5786b6d093cd0ef1c\" returns successfully" Sep 12 17:40:54.682311 containerd[1703]: time="2025-09-12T17:40:54.682259055Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:54.685301 containerd[1703]: time="2025-09-12T17:40:54.684759192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:54.686972 containerd[1703]: time="2025-09-12T17:40:54.686938912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 371.341759ms" Sep 12 17:40:54.686972 containerd[1703]: time="2025-09-12T17:40:54.686971814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:54.688115 containerd[1703]: time="2025-09-12T17:40:54.687924867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:54.691103 containerd[1703]: time="2025-09-12T17:40:54.690622915Z" level=info msg="CreateContainer within sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:54.719863 containerd[1703]: time="2025-09-12T17:40:54.719816624Z" level=info msg="CreateContainer within sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\"" Sep 12 17:40:54.721766 containerd[1703]: time="2025-09-12T17:40:54.720706373Z" level=info msg="StartContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\"" Sep 12 17:40:54.749897 systemd[1]: Started 
cri-containerd-f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786.scope - libcontainer container f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786. Sep 12 17:40:54.801842 containerd[1703]: time="2025-09-12T17:40:54.801796740Z" level=info msg="StartContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" returns successfully" Sep 12 17:40:55.048916 containerd[1703]: time="2025-09-12T17:40:55.048781348Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:55.051823 containerd[1703]: time="2025-09-12T17:40:55.051770513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:55.054161 containerd[1703]: time="2025-09-12T17:40:55.054119642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 366.157173ms" Sep 12 17:40:55.054271 containerd[1703]: time="2025-09-12T17:40:55.054173045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:55.058607 containerd[1703]: time="2025-09-12T17:40:55.058569887Z" level=info msg="CreateContainer within sandbox \"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:55.059277 containerd[1703]: time="2025-09-12T17:40:55.059245425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:40:55.099083 containerd[1703]: time="2025-09-12T17:40:55.098901209Z" level=info msg="CreateContainer within sandbox \"a4929e48e07ccae7b5776fe670194055baa9f7d45168d5693a97fb5c7b33127c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"043635bd2a6d8f6767fbc1215618b5eb71bd2d81c992f7af85e0b697b5e906f4\"" Sep 12 17:40:55.100598 containerd[1703]: time="2025-09-12T17:40:55.100387491Z" level=info msg="StartContainer for \"043635bd2a6d8f6767fbc1215618b5eb71bd2d81c992f7af85e0b697b5e906f4\"" Sep 12 17:40:55.138175 systemd[1]: Started cri-containerd-043635bd2a6d8f6767fbc1215618b5eb71bd2d81c992f7af85e0b697b5e906f4.scope - libcontainer container 043635bd2a6d8f6767fbc1215618b5eb71bd2d81c992f7af85e0b697b5e906f4. 
Sep 12 17:40:55.206041 containerd[1703]: time="2025-09-12T17:40:55.205984909Z" level=info msg="StartContainer for \"043635bd2a6d8f6767fbc1215618b5eb71bd2d81c992f7af85e0b697b5e906f4\" returns successfully" Sep 12 17:40:55.333832 kubelet[3224]: I0912 17:40:55.333173 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f8c6cc887-x9kbt" podStartSLOduration=36.712622759 podStartE2EDuration="53.333151815s" podCreationTimestamp="2025-09-12 17:40:02 +0000 UTC" firstStartedPulling="2025-09-12 17:40:38.067262803 +0000 UTC m=+52.249389558" lastFinishedPulling="2025-09-12 17:40:54.687791859 +0000 UTC m=+68.869918614" observedRunningTime="2025-09-12 17:40:55.332890001 +0000 UTC m=+69.515016856" watchObservedRunningTime="2025-09-12 17:40:55.333151815 +0000 UTC m=+69.515278670" Sep 12 17:40:55.909783 kubelet[3224]: I0912 17:40:55.909704 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fcb6c89c9-wrztx" podStartSLOduration=36.084945693 podStartE2EDuration="52.909681379s" podCreationTimestamp="2025-09-12 17:40:03 +0000 UTC" firstStartedPulling="2025-09-12 17:40:38.231058148 +0000 UTC m=+52.413185003" lastFinishedPulling="2025-09-12 17:40:55.055793934 +0000 UTC m=+69.237920689" observedRunningTime="2025-09-12 17:40:55.372558386 +0000 UTC m=+69.554685141" watchObservedRunningTime="2025-09-12 17:40:55.909681379 +0000 UTC m=+70.091808134" Sep 12 17:40:56.469187 systemd[1]: Created slice kubepods-besteffort-pod0547400d_0a9e_4cf1_be75_6f4f516a630f.slice - libcontainer container kubepods-besteffort-pod0547400d_0a9e_4cf1_be75_6f4f516a630f.slice. Sep 12 17:40:56.584338 kubelet[3224]: I0912 17:40:56.584133 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmd9\" (UniqueName: \"kubernetes.io/projected/0547400d-0a9e-4cf1-be75-6f4f516a630f-kube-api-access-gpmd9\") pod \"calico-apiserver-fcb6c89c9-rr9f8\" (UID: \"0547400d-0a9e-4cf1-be75-6f4f516a630f\") " pod="calico-apiserver/calico-apiserver-fcb6c89c9-rr9f8" Sep 12 17:40:56.584338 kubelet[3224]: I0912 17:40:56.584210 3224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0547400d-0a9e-4cf1-be75-6f4f516a630f-calico-apiserver-certs\") pod \"calico-apiserver-fcb6c89c9-rr9f8\" (UID: \"0547400d-0a9e-4cf1-be75-6f4f516a630f\") " pod="calico-apiserver/calico-apiserver-fcb6c89c9-rr9f8" Sep 12 17:40:56.779809 containerd[1703]: time="2025-09-12T17:40:56.779652210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-rr9f8,Uid:0547400d-0a9e-4cf1-be75-6f4f516a630f,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:56.963686 systemd-networkd[1594]: calib4aabd20c77: Link UP Sep 12 17:40:56.965386 systemd-networkd[1594]: calib4aabd20c77: Gained carrier Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.869 [INFO][6444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0 calico-apiserver-fcb6c89c9- calico-apiserver 0547400d-0a9e-4cf1-be75-6f4f516a630f 1159 0 2025-09-12 17:40:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fcb6c89c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-da806c5a3d calico-apiserver-fcb6c89c9-rr9f8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4aabd20c77 [] [] }} ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.869 [INFO][6444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.905 [INFO][6457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" HandleID="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.905 [INFO][6457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" HandleID="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-da806c5a3d", "pod":"calico-apiserver-fcb6c89c9-rr9f8", "timestamp":"2025-09-12 17:40:56.904985956 +0000 UTC"}, Hostname:"ci-4081.3.6-a-da806c5a3d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.905 [INFO][6457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.905 [INFO][6457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.905 [INFO][6457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-da806c5a3d' Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.912 [INFO][6457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.917 [INFO][6457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.921 [INFO][6457] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.923 [INFO][6457] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.926 [INFO][6457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.926 [INFO][6457] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.927 [INFO][6457] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722 Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.934 [INFO][6457] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.955 [INFO][6457] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.74/26] block=192.168.126.64/26 handle="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.955 [INFO][6457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.74/26] handle="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" host="ci-4081.3.6-a-da806c5a3d" Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.955 [INFO][6457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:56.985447 containerd[1703]: 2025-09-12 17:40:56.955 [INFO][6457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.74/26] IPv6=[] ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" HandleID="k8s-pod-network.43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.957 [INFO][6444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0547400d-0a9e-4cf1-be75-6f4f516a630f", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"", Pod:"calico-apiserver-fcb6c89c9-rr9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4aabd20c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.957 [INFO][6444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.74/32] ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.957 [INFO][6444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4aabd20c77 ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.966 [INFO][6444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.966 [INFO][6444] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0", GenerateName:"calico-apiserver-fcb6c89c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0547400d-0a9e-4cf1-be75-6f4f516a630f", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fcb6c89c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-da806c5a3d", ContainerID:"43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722", Pod:"calico-apiserver-fcb6c89c9-rr9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.74/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4aabd20c77", MAC:"86:39:e6:9d:8f:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:56.987597 containerd[1703]: 2025-09-12 17:40:56.983 [INFO][6444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722" Namespace="calico-apiserver" Pod="calico-apiserver-fcb6c89c9-rr9f8" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--fcb6c89c9--rr9f8-eth0" Sep 12 17:40:57.442064 containerd[1703]: time="2025-09-12T17:40:57.441692871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:57.442064 containerd[1703]: time="2025-09-12T17:40:57.441791677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:57.442064 containerd[1703]: time="2025-09-12T17:40:57.441814279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:57.442064 containerd[1703]: time="2025-09-12T17:40:57.441917185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:57.472953 systemd[1]: Started cri-containerd-43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722.scope - libcontainer container 43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722. 
Sep 12 17:40:57.567619 containerd[1703]: time="2025-09-12T17:40:57.567572097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fcb6c89c9-rr9f8,Uid:0547400d-0a9e-4cf1-be75-6f4f516a630f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722\"" Sep 12 17:40:57.571477 containerd[1703]: time="2025-09-12T17:40:57.571438632Z" level=info msg="CreateContainer within sandbox \"43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:57.607302 containerd[1703]: time="2025-09-12T17:40:57.607258602Z" level=info msg="CreateContainer within sandbox \"43e0f96f4c6623f2b84a86c38045c80ec84350c2b9e67d15e4e73c03257a6722\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"69f9240bdc264bd3a9496a8fe3b11a852b12d8d86d6ab98b942d48cefe0327b0\"" Sep 12 17:40:57.608724 containerd[1703]: time="2025-09-12T17:40:57.608687188Z" level=info msg="StartContainer for \"69f9240bdc264bd3a9496a8fe3b11a852b12d8d86d6ab98b942d48cefe0327b0\"" Sep 12 17:40:57.653058 systemd[1]: Started cri-containerd-69f9240bdc264bd3a9496a8fe3b11a852b12d8d86d6ab98b942d48cefe0327b0.scope - libcontainer container 69f9240bdc264bd3a9496a8fe3b11a852b12d8d86d6ab98b942d48cefe0327b0. Sep 12 17:40:57.731516 containerd[1703]: time="2025-09-12T17:40:57.730728382Z" level=info msg="StartContainer for \"69f9240bdc264bd3a9496a8fe3b11a852b12d8d86d6ab98b942d48cefe0327b0\" returns successfully" Sep 12 17:40:58.328810 containerd[1703]: time="2025-09-12T17:40:58.328762812Z" level=info msg="StopContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" with timeout 30 (s)" Sep 12 17:40:58.332479 containerd[1703]: time="2025-09-12T17:40:58.332427934Z" level=info msg="Stop container \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" with signal terminated" Sep 12 17:40:58.350361 kubelet[3224]: I0912 17:40:58.349872 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fcb6c89c9-rr9f8" podStartSLOduration=2.349848389 podStartE2EDuration="2.349848389s" podCreationTimestamp="2025-09-12 17:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:58.346823806 +0000 UTC m=+72.528950561" watchObservedRunningTime="2025-09-12 17:40:58.349848389 +0000 UTC m=+72.531975144" Sep 12 17:40:58.386965 systemd[1]: cri-containerd-f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786.scope: Deactivated successfully. Sep 12 17:40:58.442182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786-rootfs.mount: Deactivated successfully. Sep 12 17:40:58.573255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238701446.mount: Deactivated successfully. 
Sep 12 17:40:58.862984 systemd-networkd[1594]: calib4aabd20c77: Gained IPv6LL Sep 12 17:40:59.639244 containerd[1703]: time="2025-09-12T17:40:59.639010496Z" level=info msg="StopContainer for \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" with timeout 30 (s)" Sep 12 17:40:59.642400 containerd[1703]: time="2025-09-12T17:40:59.640291066Z" level=info msg="Stop container \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" with signal terminated" Sep 12 17:40:59.684901 systemd[1]: cri-containerd-bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106.scope: Deactivated successfully. Sep 12 17:40:59.685469 systemd[1]: cri-containerd-bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106.scope: Consumed 1.367s CPU time. Sep 12 17:40:59.713586 containerd[1703]: time="2025-09-12T17:40:59.713233150Z" level=info msg="shim disconnected" id=bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106 namespace=k8s.io Sep 12 17:40:59.713586 containerd[1703]: time="2025-09-12T17:40:59.713295054Z" level=warning msg="cleaning up after shim disconnected" id=bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106 namespace=k8s.io Sep 12 17:40:59.713586 containerd[1703]: time="2025-09-12T17:40:59.713307054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:59.715867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106-rootfs.mount: Deactivated successfully. Sep 12 17:40:59.773242 containerd[1703]: time="2025-09-12T17:40:59.772706799Z" level=info msg="shim disconnected" id=f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786 namespace=k8s.io Sep 12 17:40:59.773242 containerd[1703]: time="2025-09-12T17:40:59.772786803Z" level=warning msg="cleaning up after shim disconnected" id=f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786 namespace=k8s.io Sep 12 17:40:59.773242 containerd[1703]: time="2025-09-12T17:40:59.772798204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:59.778375 containerd[1703]: time="2025-09-12T17:40:59.778222100Z" level=info msg="StopContainer for \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" returns successfully" Sep 12 17:40:59.780770 containerd[1703]: time="2025-09-12T17:40:59.779406465Z" level=info msg="StopPodSandbox for \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\"" Sep 12 17:40:59.780770 containerd[1703]: time="2025-09-12T17:40:59.779465768Z" level=info msg="Container to stop \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:40:59.785635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79-shm.mount: Deactivated successfully. Sep 12 17:40:59.801846 systemd[1]: cri-containerd-4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79.scope: Deactivated successfully. 
Sep 12 17:40:59.802502 containerd[1703]: time="2025-09-12T17:40:59.802468925Z" level=info msg="StopContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" returns successfully" Sep 12 17:40:59.804671 containerd[1703]: time="2025-09-12T17:40:59.804538738Z" level=info msg="StopPodSandbox for \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\"" Sep 12 17:40:59.804790 containerd[1703]: time="2025-09-12T17:40:59.804695346Z" level=info msg="Container to stop \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:40:59.818924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648-shm.mount: Deactivated successfully. Sep 12 17:40:59.832732 systemd[1]: cri-containerd-f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648.scope: Deactivated successfully. Sep 12 17:40:59.852510 containerd[1703]: time="2025-09-12T17:40:59.852290646Z" level=info msg="shim disconnected" id=4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79 namespace=k8s.io Sep 12 17:40:59.852510 containerd[1703]: time="2025-09-12T17:40:59.852347649Z" level=warning msg="cleaning up after shim disconnected" id=4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79 namespace=k8s.io Sep 12 17:40:59.852510 containerd[1703]: time="2025-09-12T17:40:59.852358350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:59.854886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79-rootfs.mount: Deactivated successfully. Sep 12 17:40:59.891531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648-rootfs.mount: Deactivated successfully. 
Sep 12 17:40:59.894710 containerd[1703]: time="2025-09-12T17:40:59.894431248Z" level=info msg="shim disconnected" id=f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648 namespace=k8s.io Sep 12 17:40:59.895183 containerd[1703]: time="2025-09-12T17:40:59.895034581Z" level=warning msg="cleaning up after shim disconnected" id=f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648 namespace=k8s.io Sep 12 17:40:59.895570 containerd[1703]: time="2025-09-12T17:40:59.895062082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:59.897568 containerd[1703]: time="2025-09-12T17:40:59.896538563Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:40:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:40:59.915731 containerd[1703]: time="2025-09-12T17:40:59.915660407Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:40:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:41:00.056970 systemd-networkd[1594]: calid05d69cdaa3: Link DOWN Sep 12 17:41:00.056984 systemd-networkd[1594]: calid05d69cdaa3: Lost carrier Sep 12 17:41:00.081657 systemd-networkd[1594]: calic1427382d7f: Link DOWN Sep 12 17:41:00.081666 systemd-networkd[1594]: calic1427382d7f: Lost carrier Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.054 [INFO][6723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.054 [INFO][6723] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" iface="eth0" netns="/var/run/netns/cni-65cb318a-6d93-c23b-49db-f42407b425c0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.054 [INFO][6723] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" iface="eth0" netns="/var/run/netns/cni-65cb318a-6d93-c23b-49db-f42407b425c0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.068 [INFO][6723] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" after=13.846156ms iface="eth0" netns="/var/run/netns/cni-65cb318a-6d93-c23b-49db-f42407b425c0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.068 [INFO][6723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.068 [INFO][6723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.169 [INFO][6736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.171 [INFO][6736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.172 [INFO][6736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.244 [INFO][6736] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.245 [INFO][6736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.247 [INFO][6736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:00.254244 containerd[1703]: 2025-09-12 17:41:00.249 [INFO][6723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:00.256128 containerd[1703]: time="2025-09-12T17:41:00.255610176Z" level=info msg="TearDown network for sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" successfully" Sep 12 17:41:00.256128 containerd[1703]: time="2025-09-12T17:41:00.255654079Z" level=info msg="StopPodSandbox for \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" returns successfully" Sep 12 17:41:00.332735 kubelet[3224]: I0912 17:41:00.332698 3224 scope.go:117] "RemoveContainer" containerID="f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786" Sep 12 17:41:00.338731 containerd[1703]: time="2025-09-12T17:41:00.338686314Z" level=info msg="RemoveContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\"" Sep 12 17:41:00.340765 kubelet[3224]: I0912 17:41:00.340671 3224 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:00.355759 containerd[1703]: time="2025-09-12T17:41:00.355075309Z" level=info msg="RemoveContainer for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" returns successfully" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.079 [INFO][6714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.079 [INFO][6714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" iface="eth0" netns="/var/run/netns/cni-5814516f-a9e8-6d38-c55d-74b3df6a5a6d" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.080 [INFO][6714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" iface="eth0" netns="/var/run/netns/cni-5814516f-a9e8-6d38-c55d-74b3df6a5a6d" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.096 [INFO][6714] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" after=16.954526ms iface="eth0" netns="/var/run/netns/cni-5814516f-a9e8-6d38-c55d-74b3df6a5a6d" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.096 [INFO][6714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.097 [INFO][6714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.192 [INFO][6743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.192 [INFO][6743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.247 [INFO][6743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.346 [INFO][6743] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.346 [INFO][6743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.349 [INFO][6743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:00.356365 containerd[1703]: 2025-09-12 17:41:00.351 [INFO][6714] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:00.357201 containerd[1703]: time="2025-09-12T17:41:00.356527689Z" level=info msg="TearDown network for sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" successfully" Sep 12 17:41:00.357201 containerd[1703]: time="2025-09-12T17:41:00.356551090Z" level=info msg="StopPodSandbox for \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" returns successfully" Sep 12 17:41:00.357980 kubelet[3224]: I0912 17:41:00.357650 3224 scope.go:117] "RemoveContainer" containerID="f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786" Sep 12 17:41:00.358424 containerd[1703]: time="2025-09-12T17:41:00.358383790Z" level=error msg="ContainerStatus for \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\": not found" Sep 12 17:41:00.358806 kubelet[3224]: E0912 17:41:00.358626 3224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\": not found" containerID="f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786" Sep 12 17:41:00.358806 kubelet[3224]: I0912 17:41:00.358662 3224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786"} err="failed to get container status \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1f44bd79e5be1438c9795c27c11df03e7aeecfbe4e48b6c6e04884b05e6c786\": not found" Sep 12 17:41:00.415432 kubelet[3224]: I0912 17:41:00.415384 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl7d8\" (UniqueName: \"kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8\") pod \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\" (UID: \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\") " Sep 12 17:41:00.415599 kubelet[3224]: I0912 17:41:00.415449 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-calico-apiserver-certs\") pod \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\" (UID: \"78d725d5-0368-4c83-a47c-f7b5ec0c5f89\") " Sep 12 17:41:00.422363 kubelet[3224]: I0912 17:41:00.421979 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8" (OuterVolumeSpecName: "kube-api-access-cl7d8") pod "78d725d5-0368-4c83-a47c-f7b5ec0c5f89" (UID: "78d725d5-0368-4c83-a47c-f7b5ec0c5f89"). InnerVolumeSpecName "kube-api-access-cl7d8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:41:00.422655 kubelet[3224]: I0912 17:41:00.422582 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "78d725d5-0368-4c83-a47c-f7b5ec0c5f89" (UID: "78d725d5-0368-4c83-a47c-f7b5ec0c5f89"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:41:00.516689 kubelet[3224]: I0912 17:41:00.515818 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5br\" (UniqueName: \"kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br\") pod \"a75fb64e-a5f0-43ca-beae-d16320e589c8\" (UID: \"a75fb64e-a5f0-43ca-beae-d16320e589c8\") " Sep 12 17:41:00.516689 kubelet[3224]: I0912 17:41:00.515895 3224 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a75fb64e-a5f0-43ca-beae-d16320e589c8-calico-apiserver-certs\") pod \"a75fb64e-a5f0-43ca-beae-d16320e589c8\" (UID: \"a75fb64e-a5f0-43ca-beae-d16320e589c8\") " Sep 12 17:41:00.516689 kubelet[3224]: I0912 17:41:00.516089 3224 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cl7d8\" (UniqueName: \"kubernetes.io/projected/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-kube-api-access-cl7d8\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:41:00.516689 kubelet[3224]: I0912 17:41:00.516128 3224 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78d725d5-0368-4c83-a47c-f7b5ec0c5f89-calico-apiserver-certs\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:41:00.522304 kubelet[3224]: I0912 17:41:00.522007 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br" (OuterVolumeSpecName: "kube-api-access-7g5br") pod "a75fb64e-a5f0-43ca-beae-d16320e589c8" (UID: "a75fb64e-a5f0-43ca-beae-d16320e589c8"). InnerVolumeSpecName "kube-api-access-7g5br". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:41:00.524303 kubelet[3224]: I0912 17:41:00.524261 3224 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75fb64e-a5f0-43ca-beae-d16320e589c8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a75fb64e-a5f0-43ca-beae-d16320e589c8" (UID: "a75fb64e-a5f0-43ca-beae-d16320e589c8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:41:00.616671 kubelet[3224]: I0912 17:41:00.616628 3224 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7g5br\" (UniqueName: \"kubernetes.io/projected/a75fb64e-a5f0-43ca-beae-d16320e589c8-kube-api-access-7g5br\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:41:00.616671 kubelet[3224]: I0912 17:41:00.616666 3224 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a75fb64e-a5f0-43ca-beae-d16320e589c8-calico-apiserver-certs\") on node \"ci-4081.3.6-a-da806c5a3d\" DevicePath \"\"" Sep 12 17:41:00.653900 systemd[1]: Removed slice kubepods-besteffort-pod78d725d5_0368_4c83_a47c_f7b5ec0c5f89.slice - libcontainer container kubepods-besteffort-pod78d725d5_0368_4c83_a47c_f7b5ec0c5f89.slice. Sep 12 17:41:00.723164 systemd[1]: run-netns-cni\x2d65cb318a\x2d6d93\x2dc23b\x2d49db\x2df42407b425c0.mount: Deactivated successfully. Sep 12 17:41:00.723310 systemd[1]: run-netns-cni\x2d5814516f\x2da9e8\x2d6d38\x2dc55d\x2d74b3df6a5a6d.mount: Deactivated successfully. 
Sep 12 17:41:00.723393 systemd[1]: var-lib-kubelet-pods-a75fb64e\x2da5f0\x2d43ca\x2dbeae\x2dd16320e589c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7g5br.mount: Deactivated successfully. Sep 12 17:41:00.723482 systemd[1]: var-lib-kubelet-pods-78d725d5\x2d0368\x2d4c83\x2da47c\x2df7b5ec0c5f89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcl7d8.mount: Deactivated successfully. Sep 12 17:41:00.723561 systemd[1]: var-lib-kubelet-pods-78d725d5\x2d0368\x2d4c83\x2da47c\x2df7b5ec0c5f89-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:41:00.723642 systemd[1]: var-lib-kubelet-pods-a75fb64e\x2da5f0\x2d43ca\x2dbeae\x2dd16320e589c8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:41:00.964131 containerd[1703]: time="2025-09-12T17:41:00.964080175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:00.966274 containerd[1703]: time="2025-09-12T17:41:00.966100785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:41:00.968930 containerd[1703]: time="2025-09-12T17:41:00.968552519Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:00.973257 containerd[1703]: time="2025-09-12T17:41:00.973215374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:00.974078 containerd[1703]: time="2025-09-12T17:41:00.974046719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.914761092s" Sep 12 17:41:00.974205 containerd[1703]: time="2025-09-12T17:41:00.974184626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:41:00.975620 containerd[1703]: time="2025-09-12T17:41:00.975456796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:41:00.976762 containerd[1703]: time="2025-09-12T17:41:00.976653961Z" level=info msg="CreateContainer within sandbox \"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:41:01.012082 containerd[1703]: time="2025-09-12T17:41:01.012033794Z" level=info msg="CreateContainer within sandbox \"0e5c22ae329cfc5c16121b4a5f5290484cfffe29d5a4c0a35c4e4aa3232775b5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe\"" Sep 12 17:41:01.014900 containerd[1703]: time="2025-09-12T17:41:01.012915042Z" level=info msg="StartContainer for \"6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe\"" Sep 12 17:41:01.073232 systemd[1]: Started cri-containerd-6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe.scope - libcontainer 
container 6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe. Sep 12 17:41:01.135592 containerd[1703]: time="2025-09-12T17:41:01.135371631Z" level=info msg="StartContainer for \"6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe\" returns successfully" Sep 12 17:41:01.352835 systemd[1]: Removed slice kubepods-besteffort-poda75fb64e_a5f0_43ca_beae_d16320e589c8.slice - libcontainer container kubepods-besteffort-poda75fb64e_a5f0_43ca_beae_d16320e589c8.slice. Sep 12 17:41:01.353059 systemd[1]: kubepods-besteffort-poda75fb64e_a5f0_43ca_beae_d16320e589c8.slice: Consumed 1.400s CPU time. Sep 12 17:41:01.370556 kubelet[3224]: I0912 17:41:01.370101 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hhnb5" podStartSLOduration=35.010996177 podStartE2EDuration="56.370074751s" podCreationTimestamp="2025-09-12 17:40:05 +0000 UTC" firstStartedPulling="2025-09-12 17:40:39.616254115 +0000 UTC m=+53.798380970" lastFinishedPulling="2025-09-12 17:41:00.975332789 +0000 UTC m=+75.157459544" observedRunningTime="2025-09-12 17:41:01.36822845 +0000 UTC m=+75.550355205" watchObservedRunningTime="2025-09-12 17:41:01.370074751 +0000 UTC m=+75.552201506" Sep 12 17:41:01.715609 systemd[1]: run-containerd-runc-k8s.io-6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe-runc.MpMPow.mount: Deactivated successfully. Sep 12 17:41:01.935231 kubelet[3224]: I0912 17:41:01.935173 3224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d725d5-0368-4c83-a47c-f7b5ec0c5f89" path="/var/lib/kubelet/pods/78d725d5-0368-4c83-a47c-f7b5ec0c5f89/volumes" Sep 12 17:41:01.935847 kubelet[3224]: I0912 17:41:01.935801 3224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75fb64e-a5f0-43ca-beae-d16320e589c8" path="/var/lib/kubelet/pods/a75fb64e-a5f0-43ca-beae-d16320e589c8/volumes" Sep 12 17:41:02.859961 containerd[1703]: time="2025-09-12T17:41:02.859908029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:02.862256 containerd[1703]: time="2025-09-12T17:41:02.862089548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:41:02.864803 containerd[1703]: time="2025-09-12T17:41:02.864735093Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:02.869637 containerd[1703]: time="2025-09-12T17:41:02.868839117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:41:02.869637 containerd[1703]: time="2025-09-12T17:41:02.869501953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.894007955s" Sep 12 17:41:02.869637 containerd[1703]: time="2025-09-12T17:41:02.869537555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" 
returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:41:02.873070 containerd[1703]: time="2025-09-12T17:41:02.873029046Z" level=info msg="CreateContainer within sandbox \"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:41:02.902354 containerd[1703]: time="2025-09-12T17:41:02.902299144Z" level=info msg="CreateContainer within sandbox \"1b614a472d8e8d5b0795ea3b61e5bfd1a569f9475923709e3a5b08eae08c4767\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3\"" Sep 12 17:41:02.903857 containerd[1703]: time="2025-09-12T17:41:02.903301399Z" level=info msg="StartContainer for \"9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3\"" Sep 12 17:41:02.943192 systemd[1]: run-containerd-runc-k8s.io-9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3-runc.WYlwP7.mount: Deactivated successfully. Sep 12 17:41:02.949899 systemd[1]: Started cri-containerd-9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3.scope - libcontainer container 9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3. Sep 12 17:41:02.982460 containerd[1703]: time="2025-09-12T17:41:02.982301414Z" level=info msg="StartContainer for \"9902476079f4e49b760193a4e15def61c3d354b2f0b7cbd7c9b5592adef263a3\" returns successfully" Sep 12 17:41:03.028587 kubelet[3224]: I0912 17:41:03.028539 3224 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:41:03.029147 kubelet[3224]: I0912 17:41:03.028626 3224 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:41:33.603670 kubelet[3224]: I0912 17:41:33.603268 3224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k4nvw" podStartSLOduration=61.126027517 podStartE2EDuration="1m27.603244033s" podCreationTimestamp="2025-09-12 17:40:06 +0000 UTC" firstStartedPulling="2025-09-12 17:40:36.39326199 +0000 UTC m=+50.575388845" lastFinishedPulling="2025-09-12 17:41:02.870478506 +0000 UTC m=+77.052605361" observedRunningTime="2025-09-12 17:41:03.37601152 +0000 UTC m=+77.558138275" watchObservedRunningTime="2025-09-12 17:41:33.603244033 +0000 UTC m=+107.785370888" Sep 12 17:41:47.609376 kubelet[3224]: I0912 17:41:47.609330 3224 scope.go:117] "RemoveContainer" containerID="bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106" Sep 12 17:41:47.610992 containerd[1703]: time="2025-09-12T17:41:47.610930989Z" level=info msg="RemoveContainer for \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\"" Sep 12 17:41:47.618657 containerd[1703]: time="2025-09-12T17:41:47.618608444Z" level=info msg="RemoveContainer for \"bd04eba17f4170b445eaa39876ad24d4d26162fe4a98cc3244cf6e056b0ce106\" returns successfully" Sep 12 17:41:47.620458 containerd[1703]: time="2025-09-12T17:41:47.620130294Z" level=info msg="StopPodSandbox for \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\"" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.659 [WARNING][7085] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.660 [INFO][7085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.660 [INFO][7085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" iface="eth0" netns="" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.660 [INFO][7085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.660 [INFO][7085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.694 [INFO][7092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.694 [INFO][7092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.694 [INFO][7092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.700 [WARNING][7092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.700 [INFO][7092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.702 [INFO][7092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:47.704873 containerd[1703]: 2025-09-12 17:41:47.703 [INFO][7085] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.705957 containerd[1703]: time="2025-09-12T17:41:47.705717236Z" level=info msg="TearDown network for sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" successfully" Sep 12 17:41:47.705957 containerd[1703]: time="2025-09-12T17:41:47.705788339Z" level=info msg="StopPodSandbox for \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" returns successfully" Sep 12 17:41:47.706697 containerd[1703]: time="2025-09-12T17:41:47.706589765Z" level=info msg="RemovePodSandbox for \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\"" Sep 12 17:41:47.706697 containerd[1703]: time="2025-09-12T17:41:47.706646267Z" level=info msg="Forcibly stopping sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\"" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.773 [WARNING][7106] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.773 [INFO][7106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.773 [INFO][7106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" iface="eth0" netns="" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.773 [INFO][7106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.773 [INFO][7106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.816 [INFO][7113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.816 [INFO][7113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.817 [INFO][7113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.843 [WARNING][7113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.844 [INFO][7113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" HandleID="k8s-pod-network.4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--h9b8d-eth0" Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.845 [INFO][7113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:47.852131 containerd[1703]: 2025-09-12 17:41:47.848 [INFO][7106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79" Sep 12 17:41:47.852131 containerd[1703]: time="2025-09-12T17:41:47.852062796Z" level=info msg="TearDown network for sandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" successfully" Sep 12 17:41:47.866310 containerd[1703]: time="2025-09-12T17:41:47.865165931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:41:47.866310 containerd[1703]: time="2025-09-12T17:41:47.865303735Z" level=info msg="RemovePodSandbox \"4a8faffa4cbb449026ee7d16f1696ac8d23669d01f3e589e251254460822bc79\" returns successfully" Sep 12 17:41:47.866310 containerd[1703]: time="2025-09-12T17:41:47.865935656Z" level=info msg="StopPodSandbox for \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\"" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.934 [WARNING][7127] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.935 [INFO][7127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.935 [INFO][7127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" iface="eth0" netns="" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.935 [INFO][7127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.935 [INFO][7127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.969 [INFO][7134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.970 [INFO][7134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.970 [INFO][7134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.976 [WARNING][7134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.976 [INFO][7134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.977 [INFO][7134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:47.981018 containerd[1703]: 2025-09-12 17:41:47.979 [INFO][7127] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:47.981018 containerd[1703]: time="2025-09-12T17:41:47.980810471Z" level=info msg="TearDown network for sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" successfully" Sep 12 17:41:47.981018 containerd[1703]: time="2025-09-12T17:41:47.980842372Z" level=info msg="StopPodSandbox for \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" returns successfully" Sep 12 17:41:47.981018 containerd[1703]: time="2025-09-12T17:41:47.981494694Z" level=info msg="RemovePodSandbox for \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\"" Sep 12 17:41:47.981018 containerd[1703]: time="2025-09-12T17:41:47.981533195Z" level=info msg="Forcibly stopping sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\"" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.030 [WARNING][7148] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" WorkloadEndpoint="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.030 [INFO][7148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.030 [INFO][7148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" iface="eth0" netns="" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.030 [INFO][7148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.030 [INFO][7148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.070 [INFO][7155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.070 [INFO][7155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.070 [INFO][7155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.085 [WARNING][7155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.085 [INFO][7155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" HandleID="k8s-pod-network.f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Workload="ci--4081.3.6--a--da806c5a3d-k8s-calico--apiserver--7f8c6cc887--x9kbt-eth0" Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.086 [INFO][7155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:41:48.092907 containerd[1703]: 2025-09-12 17:41:48.091 [INFO][7148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648" Sep 12 17:41:48.093537 containerd[1703]: time="2025-09-12T17:41:48.092909393Z" level=info msg="TearDown network for sandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" successfully" Sep 12 17:41:48.101863 containerd[1703]: time="2025-09-12T17:41:48.101810389Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:41:48.102007 containerd[1703]: time="2025-09-12T17:41:48.101903892Z" level=info msg="RemovePodSandbox \"f5b8e4e2029ac46c7d374f98e719893e4b3bced0c1a559e496c124bbda9e1648\" returns successfully" Sep 12 17:42:02.180891 systemd[1]: run-containerd-runc-k8s.io-2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb-runc.4CF89w.mount: Deactivated successfully. Sep 12 17:42:32.175115 systemd[1]: run-containerd-runc-k8s.io-2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb-runc.TzOMKc.mount: Deactivated successfully. Sep 12 17:42:37.182722 systemd[1]: Started sshd@7-10.200.4.37:22-10.200.16.10:33260.service - OpenSSH per-connection server daemon (10.200.16.10:33260). Sep 12 17:42:37.775820 sshd[7318]: Accepted publickey for core from 10.200.16.10 port 33260 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:37.777309 sshd[7318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:37.781333 systemd-logind[1680]: New session 10 of user core. Sep 12 17:42:37.788910 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:42:38.276070 sshd[7318]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:38.279102 systemd[1]: sshd@7-10.200.4.37:22-10.200.16.10:33260.service: Deactivated successfully. Sep 12 17:42:38.281531 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:42:38.283268 systemd-logind[1680]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:42:38.284786 systemd-logind[1680]: Removed session 10. Sep 12 17:42:43.385068 systemd[1]: Started sshd@8-10.200.4.37:22-10.200.16.10:51294.service - OpenSSH per-connection server daemon (10.200.16.10:51294). 
Sep 12 17:42:43.978508 sshd[7372]: Accepted publickey for core from 10.200.16.10 port 51294 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:43.980985 sshd[7372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:43.988815 systemd-logind[1680]: New session 11 of user core. Sep 12 17:42:43.995280 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:42:44.492273 sshd[7372]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:44.496183 systemd[1]: sshd@8-10.200.4.37:22-10.200.16.10:51294.service: Deactivated successfully. Sep 12 17:42:44.498053 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:42:44.499278 systemd-logind[1680]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:42:44.500639 systemd-logind[1680]: Removed session 11. Sep 12 17:42:49.601074 systemd[1]: Started sshd@9-10.200.4.37:22-10.200.16.10:51302.service - OpenSSH per-connection server daemon (10.200.16.10:51302). Sep 12 17:42:50.182881 sshd[7388]: Accepted publickey for core from 10.200.16.10 port 51302 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:50.184401 sshd[7388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:50.189559 systemd-logind[1680]: New session 12 of user core. Sep 12 17:42:50.197935 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:42:50.656692 sshd[7388]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:50.660571 systemd-logind[1680]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:42:50.661283 systemd[1]: sshd@9-10.200.4.37:22-10.200.16.10:51302.service: Deactivated successfully. Sep 12 17:42:50.663474 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:42:50.664391 systemd-logind[1680]: Removed session 12. Sep 12 17:42:50.761223 systemd[1]: Started sshd@10-10.200.4.37:22-10.200.16.10:34906.service - OpenSSH per-connection server daemon (10.200.16.10:34906). Sep 12 17:42:51.345373 sshd[7402]: Accepted publickey for core from 10.200.16.10 port 34906 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:51.347038 sshd[7402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:51.352076 systemd-logind[1680]: New session 13 of user core. Sep 12 17:42:51.355885 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:42:51.851412 sshd[7402]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:51.855909 systemd[1]: sshd@10-10.200.4.37:22-10.200.16.10:34906.service: Deactivated successfully. Sep 12 17:42:51.862206 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:42:51.863156 systemd-logind[1680]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:42:51.864301 systemd-logind[1680]: Removed session 13. Sep 12 17:42:51.961389 systemd[1]: Started sshd@11-10.200.4.37:22-10.200.16.10:34918.service - OpenSSH per-connection server daemon (10.200.16.10:34918). Sep 12 17:42:52.560831 sshd[7415]: Accepted publickey for core from 10.200.16.10 port 34918 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:52.562398 sshd[7415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:52.567003 systemd-logind[1680]: New session 14 of user core. Sep 12 17:42:52.570909 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:42:53.041232 sshd[7415]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:53.044917 systemd[1]: sshd@11-10.200.4.37:22-10.200.16.10:34918.service: Deactivated successfully. Sep 12 17:42:53.047140 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:42:53.048029 systemd-logind[1680]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:42:53.049463 systemd-logind[1680]: Removed session 14. Sep 12 17:42:58.154057 systemd[1]: Started sshd@12-10.200.4.37:22-10.200.16.10:34930.service - OpenSSH per-connection server daemon (10.200.16.10:34930). Sep 12 17:42:58.733030 sshd[7428]: Accepted publickey for core from 10.200.16.10 port 34930 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:42:58.734578 sshd[7428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:42:58.738798 systemd-logind[1680]: New session 15 of user core. Sep 12 17:42:58.749921 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:42:59.207561 sshd[7428]: pam_unix(sshd:session): session closed for user core Sep 12 17:42:59.212351 systemd-logind[1680]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:42:59.213339 systemd[1]: sshd@12-10.200.4.37:22-10.200.16.10:34930.service: Deactivated successfully. Sep 12 17:42:59.215597 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:42:59.216711 systemd-logind[1680]: Removed session 15. Sep 12 17:43:02.175421 systemd[1]: run-containerd-runc-k8s.io-2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb-runc.DXIi6H.mount: Deactivated successfully. Sep 12 17:43:04.315131 systemd[1]: Started sshd@13-10.200.4.37:22-10.200.16.10:42898.service - OpenSSH per-connection server daemon (10.200.16.10:42898). Sep 12 17:43:04.905219 sshd[7488]: Accepted publickey for core from 10.200.16.10 port 42898 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:04.906900 sshd[7488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:04.911859 systemd-logind[1680]: New session 16 of user core. Sep 12 17:43:04.913963 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:43:05.385581 sshd[7488]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:05.389349 systemd[1]: sshd@13-10.200.4.37:22-10.200.16.10:42898.service: Deactivated successfully. Sep 12 17:43:05.391992 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:43:05.394033 systemd-logind[1680]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:43:05.395500 systemd-logind[1680]: Removed session 16. Sep 12 17:43:10.505050 systemd[1]: Started sshd@14-10.200.4.37:22-10.200.16.10:57542.service - OpenSSH per-connection server daemon (10.200.16.10:57542). Sep 12 17:43:11.108761 sshd[7519]: Accepted publickey for core from 10.200.16.10 port 57542 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:11.111324 sshd[7519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:11.119098 systemd-logind[1680]: New session 17 of user core. Sep 12 17:43:11.124655 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:43:11.638504 sshd[7519]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:11.642118 systemd[1]: sshd@14-10.200.4.37:22-10.200.16.10:57542.service: Deactivated successfully. Sep 12 17:43:11.647278 systemd[1]: session-17.scope: Deactivated successfully. 
Sep 12 17:43:11.651358 systemd-logind[1680]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:43:11.655207 systemd-logind[1680]: Removed session 17. Sep 12 17:43:16.745040 systemd[1]: Started sshd@15-10.200.4.37:22-10.200.16.10:57546.service - OpenSSH per-connection server daemon (10.200.16.10:57546). Sep 12 17:43:17.326493 sshd[7538]: Accepted publickey for core from 10.200.16.10 port 57546 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:17.328099 sshd[7538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:17.333068 systemd-logind[1680]: New session 18 of user core. Sep 12 17:43:17.337910 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:43:17.803232 sshd[7538]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:17.807795 systemd[1]: sshd@15-10.200.4.37:22-10.200.16.10:57546.service: Deactivated successfully. Sep 12 17:43:17.810699 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:43:17.811480 systemd-logind[1680]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:43:17.812876 systemd-logind[1680]: Removed session 18. Sep 12 17:43:17.911086 systemd[1]: Started sshd@16-10.200.4.37:22-10.200.16.10:57558.service - OpenSSH per-connection server daemon (10.200.16.10:57558). Sep 12 17:43:18.502489 sshd[7552]: Accepted publickey for core from 10.200.16.10 port 57558 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:18.504024 sshd[7552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:18.508804 systemd-logind[1680]: New session 19 of user core. Sep 12 17:43:18.511906 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:43:19.002770 sshd[7552]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:19.007296 systemd-logind[1680]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:43:19.008334 systemd[1]: sshd@16-10.200.4.37:22-10.200.16.10:57558.service: Deactivated successfully. Sep 12 17:43:19.010674 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:43:19.011687 systemd-logind[1680]: Removed session 19. Sep 12 17:43:19.111053 systemd[1]: Started sshd@17-10.200.4.37:22-10.200.16.10:57560.service - OpenSSH per-connection server daemon (10.200.16.10:57560). Sep 12 17:43:19.691784 sshd[7562]: Accepted publickey for core from 10.200.16.10 port 57560 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:19.692488 sshd[7562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:19.696857 systemd-logind[1680]: New session 20 of user core. Sep 12 17:43:19.701908 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:43:20.663734 sshd[7562]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:20.668117 systemd[1]: sshd@17-10.200.4.37:22-10.200.16.10:57560.service: Deactivated successfully. Sep 12 17:43:20.670800 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:43:20.671961 systemd-logind[1680]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:43:20.673328 systemd-logind[1680]: Removed session 20. Sep 12 17:43:20.772059 systemd[1]: Started sshd@18-10.200.4.37:22-10.200.16.10:42976.service - OpenSSH per-connection server daemon (10.200.16.10:42976). 
Sep 12 17:43:21.352184 sshd[7580]: Accepted publickey for core from 10.200.16.10 port 42976 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:21.353688 sshd[7580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:21.360267 systemd-logind[1680]: New session 21 of user core. Sep 12 17:43:21.362226 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:43:21.934220 sshd[7580]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:21.942202 systemd[1]: sshd@18-10.200.4.37:22-10.200.16.10:42976.service: Deactivated successfully. Sep 12 17:43:21.943954 systemd-logind[1680]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:43:21.946112 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:43:21.949301 systemd-logind[1680]: Removed session 21. Sep 12 17:43:22.047331 systemd[1]: Started sshd@19-10.200.4.37:22-10.200.16.10:42984.service - OpenSSH per-connection server daemon (10.200.16.10:42984). Sep 12 17:43:22.628340 sshd[7596]: Accepted publickey for core from 10.200.16.10 port 42984 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:22.630016 sshd[7596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:22.635524 systemd-logind[1680]: New session 22 of user core. Sep 12 17:43:22.639914 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:43:23.099914 sshd[7596]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:23.103894 systemd-logind[1680]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:43:23.104590 systemd[1]: sshd@19-10.200.4.37:22-10.200.16.10:42984.service: Deactivated successfully. Sep 12 17:43:23.107351 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:43:23.108441 systemd-logind[1680]: Removed session 22. Sep 12 17:43:28.212059 systemd[1]: Started sshd@20-10.200.4.37:22-10.200.16.10:42990.service - OpenSSH per-connection server daemon (10.200.16.10:42990). Sep 12 17:43:28.801690 sshd[7609]: Accepted publickey for core from 10.200.16.10 port 42990 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:28.803257 sshd[7609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:28.807381 systemd-logind[1680]: New session 23 of user core. Sep 12 17:43:28.813918 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:43:29.281822 sshd[7609]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:29.284635 systemd[1]: sshd@20-10.200.4.37:22-10.200.16.10:42990.service: Deactivated successfully. Sep 12 17:43:29.287516 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:43:29.289632 systemd-logind[1680]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:43:29.291064 systemd-logind[1680]: Removed session 23. Sep 12 17:43:33.381062 systemd[1]: run-containerd-runc-k8s.io-6cbb98e1ffd6ec6bf10db7c8e5007500d6f3625d9238d22ab8876b2933f4c5fe-runc.zoHasF.mount: Deactivated successfully. Sep 12 17:43:34.400856 systemd[1]: Started sshd@21-10.200.4.37:22-10.200.16.10:35130.service - OpenSSH per-connection server daemon (10.200.16.10:35130). 
Sep 12 17:43:34.998862 sshd[7686]: Accepted publickey for core from 10.200.16.10 port 35130 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:35.000883 sshd[7686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:35.011468 systemd-logind[1680]: New session 24 of user core. Sep 12 17:43:35.017300 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:43:35.522014 sshd[7686]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:35.530140 systemd[1]: sshd@21-10.200.4.37:22-10.200.16.10:35130.service: Deactivated successfully. Sep 12 17:43:35.533416 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:43:35.535528 systemd-logind[1680]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:43:35.538208 systemd-logind[1680]: Removed session 24. Sep 12 17:43:40.248986 systemd[1]: run-containerd-runc-k8s.io-ffdd88686aa78291eb743c50294ae59ecb78f7df623800f4f88cce8f705cb717-runc.uzyXxE.mount: Deactivated successfully. Sep 12 17:43:40.632077 systemd[1]: Started sshd@22-10.200.4.37:22-10.200.16.10:48568.service - OpenSSH per-connection server daemon (10.200.16.10:48568). Sep 12 17:43:41.218360 sshd[7755]: Accepted publickey for core from 10.200.16.10 port 48568 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:41.220220 sshd[7755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:41.225200 systemd-logind[1680]: New session 25 of user core. Sep 12 17:43:41.231903 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:43:41.764009 sshd[7755]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:41.769237 systemd-logind[1680]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:43:41.771902 systemd[1]: sshd@22-10.200.4.37:22-10.200.16.10:48568.service: Deactivated successfully. Sep 12 17:43:41.775290 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:43:41.779398 systemd-logind[1680]: Removed session 25. Sep 12 17:43:46.868262 systemd[1]: Started sshd@23-10.200.4.37:22-10.200.16.10:48572.service - OpenSSH per-connection server daemon (10.200.16.10:48572). Sep 12 17:43:47.459875 sshd[7777]: Accepted publickey for core from 10.200.16.10 port 48572 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:47.461449 sshd[7777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:47.465797 systemd-logind[1680]: New session 26 of user core. Sep 12 17:43:47.470896 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:43:47.932044 sshd[7777]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:47.937391 systemd-logind[1680]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:43:47.938242 systemd[1]: sshd@23-10.200.4.37:22-10.200.16.10:48572.service: Deactivated successfully. Sep 12 17:43:47.940727 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:43:47.942849 systemd-logind[1680]: Removed session 26. Sep 12 17:43:53.043133 systemd[1]: Started sshd@24-10.200.4.37:22-10.200.16.10:50990.service - OpenSSH per-connection server daemon (10.200.16.10:50990). 
Sep 12 17:43:53.627931 sshd[7792]: Accepted publickey for core from 10.200.16.10 port 50990 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:53.629548 sshd[7792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:53.634305 systemd-logind[1680]: New session 27 of user core. Sep 12 17:43:53.639904 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:43:54.109149 sshd[7792]: pam_unix(sshd:session): session closed for user core Sep 12 17:43:54.113481 systemd[1]: sshd@24-10.200.4.37:22-10.200.16.10:50990.service: Deactivated successfully. Sep 12 17:43:54.115631 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:43:54.116429 systemd-logind[1680]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:43:54.118123 systemd-logind[1680]: Removed session 27. Sep 12 17:43:59.220092 systemd[1]: Started sshd@25-10.200.4.37:22-10.200.16.10:51002.service - OpenSSH per-connection server daemon (10.200.16.10:51002). Sep 12 17:43:59.805070 sshd[7805]: Accepted publickey for core from 10.200.16.10 port 51002 ssh2: RSA SHA256:1wIl7LUy1+R3XqM/oP861V4U3Hgw6is7P7ZpfeXUIUY Sep 12 17:43:59.806657 sshd[7805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:43:59.811825 systemd-logind[1680]: New session 28 of user core. Sep 12 17:43:59.819909 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:44:00.278832 sshd[7805]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:00.283562 systemd[1]: sshd@25-10.200.4.37:22-10.200.16.10:51002.service: Deactivated successfully. Sep 12 17:44:00.286090 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:44:00.287320 systemd-logind[1680]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:44:00.288394 systemd-logind[1680]: Removed session 28. Sep 12 17:44:02.180049 systemd[1]: run-containerd-runc-k8s.io-2f5dbec2a1842663f1567f9332c78e259b96fa43b4df0c37e08692e90252f0eb-runc.cYpg2g.mount: Deactivated successfully.