Sep 13 00:10:17.119306 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:10:17.119341 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:17.119355 kernel: BIOS-provided physical RAM map:
Sep 13 00:10:17.119365 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:10:17.119374 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 13 00:10:17.119384 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 13 00:10:17.119397 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 13 00:10:17.119410 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc4fff] reserved
Sep 13 00:10:17.119421 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd2fff] usable
Sep 13 00:10:17.119431 kernel: BIOS-e820: [mem 0x000000003ffd3000-0x000000003fffafff] ACPI data
Sep 13 00:10:17.119442 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 13 00:10:17.119452 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 13 00:10:17.119463 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 13 00:10:17.119474 kernel: printk: bootconsole [earlyser0] enabled
Sep 13 00:10:17.119490 kernel: NX (Execute Disable) protection: active
Sep 13 00:10:17.119502 kernel: APIC: Static calls initialized
Sep 13 00:10:17.119513 kernel: efi: EFI v2.7 by Microsoft
Sep 13 00:10:17.119525 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f339a98
Sep 13 00:10:17.119537 kernel: SMBIOS 3.1.0 present.
Sep 13 00:10:17.119549 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 00:10:17.119573 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 13 00:10:17.119583 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 13 00:10:17.119594 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 13 00:10:17.119604 kernel: Hyper-V: Nested features: 0x1e0101
Sep 13 00:10:17.119620 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 13 00:10:17.119632 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 13 00:10:17.119643 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:10:17.119652 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:10:17.119660 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 13 00:10:17.119668 kernel: tsc: Detected 2593.909 MHz processor
Sep 13 00:10:17.119676 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:10:17.119687 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:10:17.119705 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 13 00:10:17.119721 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:10:17.119733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:10:17.119747 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 13 00:10:17.119759 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 13 00:10:17.119771 kernel: Using GB pages for direct mapping
Sep 13 00:10:17.119784 kernel: Secure boot disabled
Sep 13 00:10:17.119798 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:10:17.119818 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 13 00:10:17.119836 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119850 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119865 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 00:10:17.119880 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 13 00:10:17.119895 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119909 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119926 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119941 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119956 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119971 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:10:17.119985 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 13 00:10:17.120000 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Sep 13 00:10:17.120015 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 13 00:10:17.120029 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 13 00:10:17.120045 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 13 00:10:17.120063 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 13 00:10:17.120075 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 13 00:10:17.120088 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 13 00:10:17.120102 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 13 00:10:17.120114 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:10:17.120126 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:10:17.120138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 13 00:10:17.120152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 13 00:10:17.120164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 13 00:10:17.120180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 13 00:10:17.120194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 13 00:10:17.120207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 13 00:10:17.120220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 13 00:10:17.120233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 13 00:10:17.120246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 13 00:10:17.120259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 13 00:10:17.120271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 13 00:10:17.120291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 13 00:10:17.120307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 13 00:10:17.120333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 13 00:10:17.120355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 13 00:10:17.120369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 13 00:10:17.120382 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 13 00:10:17.120396 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 13 00:10:17.120410 kernel: Zone ranges:
Sep 13 00:10:17.120424 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:10:17.120441 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:10:17.120455 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:10:17.120469 kernel: Movable zone start for each node
Sep 13 00:10:17.120483 kernel: Early memory node ranges
Sep 13 00:10:17.120496 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:10:17.120510 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 13 00:10:17.120524 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd2fff]
Sep 13 00:10:17.120537 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 13 00:10:17.120562 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:10:17.120587 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 13 00:10:17.120601 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:10:17.120615 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:10:17.120628 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 13 00:10:17.120642 kernel: On node 0, zone DMA32: 44 pages in unavailable ranges
Sep 13 00:10:17.120656 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 13 00:10:17.120669 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 13 00:10:17.120683 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:10:17.120697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:10:17.120714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:10:17.120727 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 13 00:10:17.120741 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:10:17.120755 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 13 00:10:17.120768 kernel: Booting paravirtualized kernel on Hyper-V
Sep 13 00:10:17.120782 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:10:17.120796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:10:17.120810 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:10:17.120824 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:10:17.120840 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:10:17.120853 kernel: Hyper-V: PV spinlocks enabled
Sep 13 00:10:17.120867 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:10:17.120883 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:17.120897 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:10:17.120910 kernel: random: crng init done
Sep 13 00:10:17.120924 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:10:17.120938 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:10:17.120955 kernel: Fallback order for Node 0: 0
Sep 13 00:10:17.120979 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062376
Sep 13 00:10:17.120993 kernel: Policy zone: Normal
Sep 13 00:10:17.121011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:10:17.121025 kernel: software IO TLB: area num 2.
Sep 13 00:10:17.121040 kernel: Memory: 8074604K/8387516K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 312652K reserved, 0K cma-reserved)
Sep 13 00:10:17.121055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:10:17.121069 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:10:17.121084 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:10:17.121098 kernel: Dynamic Preempt: voluntary
Sep 13 00:10:17.121113 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:10:17.121131 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:10:17.121146 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:10:17.121161 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:10:17.121176 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:10:17.121191 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:10:17.121206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:10:17.121224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:10:17.121238 kernel: Using NULL legacy PIC
Sep 13 00:10:17.121253 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 13 00:10:17.121267 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:10:17.121282 kernel: Console: colour dummy device 80x25
Sep 13 00:10:17.121297 kernel: printk: console [tty1] enabled
Sep 13 00:10:17.121312 kernel: printk: console [ttyS0] enabled
Sep 13 00:10:17.121326 kernel: printk: bootconsole [earlyser0] disabled
Sep 13 00:10:17.121341 kernel: ACPI: Core revision 20230628
Sep 13 00:10:17.121358 kernel: Failed to register legacy timer interrupt
Sep 13 00:10:17.121373 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:10:17.121387 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 13 00:10:17.121402 kernel: Hyper-V: Using IPI hypercalls
Sep 13 00:10:17.121417 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 13 00:10:17.121432 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 13 00:10:17.121447 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 13 00:10:17.121462 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 13 00:10:17.121476 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 13 00:10:17.121494 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 13 00:10:17.121509 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909)
Sep 13 00:10:17.121524 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:10:17.121539 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:10:17.121567 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:10:17.121582 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:10:17.121597 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:10:17.121611 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:10:17.121626 kernel: RETBleed: Vulnerable
Sep 13 00:10:17.121640 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:10:17.121658 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:17.121673 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:17.121688 kernel: active return thunk: its_return_thunk
Sep 13 00:10:17.121702 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:10:17.121716 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:10:17.121731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:10:17.121746 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:10:17.121760 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:10:17.121775 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:10:17.121789 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:10:17.121807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:10:17.121821 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 13 00:10:17.121836 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 13 00:10:17.121850 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 13 00:10:17.121865 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 13 00:10:17.121879 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:10:17.121894 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:10:17.121908 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:10:17.121923 kernel: landlock: Up and running.
Sep 13 00:10:17.121937 kernel: SELinux: Initializing.
Sep 13 00:10:17.121952 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.121966 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.121983 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:10:17.122002 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122030 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122059 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122072 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:10:17.122085 kernel: signal: max sigframe size: 3632
Sep 13 00:10:17.122099 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:10:17.122114 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:10:17.122128 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:10:17.122144 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:10:17.122156 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:10:17.122170 kernel: .... node #0, CPUs: #1
Sep 13 00:10:17.122185 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 13 00:10:17.122201 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:10:17.122213 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:10:17.122225 kernel: smpboot: Max logical packages: 1
Sep 13 00:10:17.122240 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Sep 13 00:10:17.122256 kernel: devtmpfs: initialized
Sep 13 00:10:17.122273 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:10:17.122291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 13 00:10:17.122304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:10:17.122317 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:10:17.122332 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:10:17.122346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:10:17.122359 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:10:17.122372 kernel: audit: type=2000 audit(1757722215.029:1): state=initialized audit_enabled=0 res=1
Sep 13 00:10:17.122390 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:10:17.122404 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:10:17.122419 kernel: cpuidle: using governor menu
Sep 13 00:10:17.122433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:10:17.122448 kernel: dca service started, version 1.12.1
Sep 13 00:10:17.122463 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 13 00:10:17.122478 kernel: e820: reserve RAM buffer [mem 0x3ffd3000-0x3fffffff]
Sep 13 00:10:17.122493 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:10:17.122508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:10:17.122527 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:10:17.122541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:10:17.122575 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:10:17.122591 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:10:17.122606 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:10:17.122621 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:10:17.122636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:10:17.122651 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:10:17.122666 kernel: ACPI: Interpreter enabled
Sep 13 00:10:17.122685 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:10:17.122700 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:10:17.122715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:10:17.122730 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:10:17.122746 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 13 00:10:17.122760 kernel: iommu: Default domain type: Translated
Sep 13 00:10:17.122776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:10:17.122791 kernel: efivars: Registered efivars operations
Sep 13 00:10:17.122804 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:10:17.122821 kernel: PCI: System does not support PCI
Sep 13 00:10:17.122835 kernel: vgaarb: loaded
Sep 13 00:10:17.122850 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 13 00:10:17.122865 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:10:17.122879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:10:17.122894 kernel: pnp: PnP ACPI init
Sep 13 00:10:17.122909 kernel: pnp: PnP ACPI: found 3 devices
Sep 13 00:10:17.122924 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:10:17.122938 kernel: NET: Registered PF_INET protocol family
Sep 13 00:10:17.122956 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:10:17.122971 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:10:17.122985 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:10:17.122998 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:10:17.123013 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:10:17.123027 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:10:17.123041 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.123056 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.123069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:10:17.123087 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:10:17.123101 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:10:17.123115 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:10:17.123130 kernel: software IO TLB: mapped [mem 0x000000003b339000-0x000000003f339000] (64MB)
Sep 13 00:10:17.123144 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:10:17.123158 kernel: Initialise system trusted keyrings
Sep 13 00:10:17.123172 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:10:17.123186 kernel: Key type asymmetric registered
Sep 13 00:10:17.123200 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:10:17.123216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:10:17.123230 kernel: io scheduler mq-deadline registered
Sep 13 00:10:17.123245 kernel: io scheduler kyber registered
Sep 13 00:10:17.123259 kernel: io scheduler bfq registered
Sep 13 00:10:17.123272 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:10:17.123287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:10:17.123301 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:10:17.123316 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:10:17.123330 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 00:10:17.123506 kernel: rtc_cmos 00:02: registered as rtc0
Sep 13 00:10:17.123648 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:10:16 UTC (1757722216)
Sep 13 00:10:17.123774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 13 00:10:17.123791 kernel: intel_pstate: CPU model not supported
Sep 13 00:10:17.123806 kernel: efifb: probing for efifb
Sep 13 00:10:17.123821 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:10:17.123834 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:10:17.123848 kernel: efifb: scrolling: redraw
Sep 13 00:10:17.123867 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:10:17.123881 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:10:17.123895 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:10:17.123909 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:10:17.123922 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:10:17.123936 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:10:17.123948 kernel: Segment Routing with IPv6
Sep 13 00:10:17.123961 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:10:17.123976 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:10:17.123993 kernel: Key type dns_resolver registered
Sep 13 00:10:17.124007 kernel: IPI shorthand broadcast: enabled
Sep 13 00:10:17.124022 kernel: sched_clock: Marking stable (945002600, 54804200)->(1237797300, -237990500)
Sep 13 00:10:17.124035 kernel: registered taskstats version 1
Sep 13 00:10:17.124049 kernel: Loading compiled-in X.509 certificates
Sep 13 00:10:17.124063 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:10:17.124076 kernel: Key type .fscrypt registered
Sep 13 00:10:17.124089 kernel: Key type fscrypt-provisioning registered
Sep 13 00:10:17.124102 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:10:17.124119 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:10:17.124134 kernel: ima: No architecture policies found
Sep 13 00:10:17.124148 kernel: clk: Disabling unused clocks
Sep 13 00:10:17.124161 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:10:17.124187 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:10:17.124205 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:10:17.124218 kernel: Run /init as init process
Sep 13 00:10:17.124229 kernel: with arguments:
Sep 13 00:10:17.124244 kernel: /init
Sep 13 00:10:17.124261 kernel: with environment:
Sep 13 00:10:17.124279 kernel: HOME=/
Sep 13 00:10:17.124292 kernel: TERM=linux
Sep 13 00:10:17.124304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:10:17.124321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:10:17.124339 systemd[1]: Detected virtualization microsoft.
Sep 13 00:10:17.124355 systemd[1]: Detected architecture x86-64.
Sep 13 00:10:17.124374 systemd[1]: Running in initrd.
Sep 13 00:10:17.124389 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:10:17.124404 systemd[1]: Hostname set to .
Sep 13 00:10:17.124421 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:10:17.124436 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:10:17.124452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:17.124467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:17.124484 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:10:17.124503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:10:17.124518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:10:17.124533 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:10:17.124807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:10:17.124828 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:10:17.124843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:17.124858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:17.124878 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:10:17.124893 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:10:17.124908 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:10:17.124922 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:10:17.124938 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:17.124953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:17.124968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:10:17.124983 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:10:17.124998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:17.125016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:17.125031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:17.125046 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:10:17.125061 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:10:17.125076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:10:17.125091 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:10:17.125106 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:10:17.125121 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:10:17.125136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:10:17.125175 systemd-journald[176]: Collecting audit messages is disabled.
Sep 13 00:10:17.125208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:17.125223 systemd-journald[176]: Journal started
Sep 13 00:10:17.125259 systemd-journald[176]: Runtime Journal (/run/log/journal/2d8fcf731cc947989c83285752018a4c) is 8.0M, max 158.8M, 150.8M free.
Sep 13 00:10:17.122306 systemd-modules-load[177]: Inserted module 'overlay'
Sep 13 00:10:17.133096 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:10:17.137148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:17.140821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:17.144640 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:10:17.175256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:10:17.166762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:10:17.181498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:10:17.187592 kernel: Bridge firewalling registered
Sep 13 00:10:17.190285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:17.193472 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 13 00:10:17.203369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:17.207075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:17.217511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:17.227732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:17.233680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:10:17.238089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:10:17.256136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:17.264545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:17.267984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:17.280793 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:10:17.288720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:17.303581 dracut-cmdline[212]: dracut-dracut-053
Sep 13 00:10:17.308591 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:17.327583 systemd-resolved[213]: Positive Trust Anchors:
Sep 13 00:10:17.330226 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:10:17.334655 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:10:17.347837 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 13 00:10:17.348754 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:17.356472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:17.400575 kernel: SCSI subsystem initialized
Sep 13 00:10:17.412571 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:10:17.423578 kernel: iscsi: registered transport (tcp)
Sep 13 00:10:17.444741 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:10:17.444803 kernel: QLogic iSCSI HBA Driver
Sep 13 00:10:17.480190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:10:17.490738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:10:17.519884 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:10:17.519963 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:10:17.524573 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:10:17.564580 kernel: raid6: avx512x4 gen() 18451 MB/s
Sep 13 00:10:17.583567 kernel: raid6: avx512x2 gen() 18619 MB/s
Sep 13 00:10:17.602566 kernel: raid6: avx512x1 gen() 17730 MB/s
Sep 13 00:10:17.622568 kernel: raid6: avx2x4 gen() 18577 MB/s
Sep 13 00:10:17.641564 kernel: raid6: avx2x2 gen() 18534 MB/s
Sep 13 00:10:17.662133 kernel: raid6: avx2x1 gen() 14081 MB/s
Sep 13 00:10:17.662163 kernel: raid6: using algorithm avx512x2 gen() 18619 MB/s
Sep 13 00:10:17.684905 kernel: raid6: .... xor() 30454 MB/s, rmw enabled
Sep 13 00:10:17.684945 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:10:17.707569 kernel: xor: automatically using best checksumming function avx
Sep 13 00:10:17.855580 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:10:17.865474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:10:17.875746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:17.893331 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep 13 00:10:17.900221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:17.913776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:10:17.927249 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 13 00:10:17.956640 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:10:17.965705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:10:18.011919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:18.023740 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:10:18.054081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:10:18.062997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:10:18.071564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:18.078643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:10:18.089761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:10:18.111491 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:10:18.118104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:10:18.145572 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:10:18.145618 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:10:18.146577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:18.146799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:18.159277 kernel: hv_vmbus: Vmbus version:5.2
Sep 13 00:10:18.160093 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:18.163626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:18.163880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:18.167068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:18.190888 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 00:10:18.201790 kernel: scsi host0: storvsc_host_t
Sep 13 00:10:18.201857 kernel: scsi host1: storvsc_host_t
Sep 13 00:10:18.195311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:18.217550 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 00:10:18.217695 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:10:18.217787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:10:18.221654 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 00:10:18.221694 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 00:10:18.234584 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 00:10:18.241580 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 13 00:10:18.248607 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:10:18.254243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:18.270631 kernel: PTP clock support registered
Sep 13 00:10:18.266768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:18.291010 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:10:18.291057 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:10:18.299780 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:10:18.299824 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:10:18.299847 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:10:18.300569 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 00:10:18.734451 systemd-resolved[213]: Clock change detected. Flushing caches.
Sep 13 00:10:18.748015 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 00:10:18.748045 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 00:10:18.748219 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 00:10:18.748348 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:10:18.760480 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 00:10:18.762542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:18.784494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:10:18.791943 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 00:10:18.792274 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 00:10:18.795333 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:10:18.795556 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 00:10:18.800480 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 00:10:18.809031 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:10:18.809314 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:10:18.819485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:10:18.866389 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: VF slot 1 added
Sep 13 00:10:18.874708 kernel: hv_vmbus: registering driver hv_pci
Sep 13 00:10:18.879489 kernel: hv_pci a6b1ef4f-da14-47de-970c-b98603098d18: PCI VMBus probing: Using version 0x10004
Sep 13 00:10:18.886705 kernel: hv_pci a6b1ef4f-da14-47de-970c-b98603098d18: PCI host bridge to bus da14:00
Sep 13 00:10:18.886978 kernel: pci_bus da14:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 13 00:10:18.890407 kernel: pci_bus da14:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 00:10:18.895496 kernel: pci da14:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 13 00:10:18.901505 kernel: pci da14:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:10:18.905483 kernel: pci da14:00:02.0: enabling Extended Tags
Sep 13 00:10:18.918483 kernel: pci da14:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at da14:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 13 00:10:18.925505 kernel: pci_bus da14:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 00:10:18.925797 kernel: pci da14:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:10:19.092821 kernel: mlx5_core da14:00:02.0: enabling device (0000 -> 0002)
Sep 13 00:10:19.097487 kernel: mlx5_core da14:00:02.0: firmware version: 14.30.5000
Sep 13 00:10:19.287494 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Sep 13 00:10:19.305804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 13 00:10:19.336655 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (467)
Sep 13 00:10:19.336715 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: VF registering: eth1
Sep 13 00:10:19.338145 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 13 00:10:19.351908 kernel: mlx5_core da14:00:02.0 eth1: joined to eth0
Sep 13 00:10:19.352126 kernel: mlx5_core da14:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 13 00:10:19.362489 kernel: mlx5_core da14:00:02.0 enP55828s1: renamed from eth1
Sep 13 00:10:19.364349 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 13 00:10:19.373607 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 13 00:10:19.391603 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:10:19.432391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 13 00:10:20.417441 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:10:20.417871 disk-uuid[599]: The operation has completed successfully.
Sep 13 00:10:20.497314 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:10:20.497432 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:10:20.525660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:10:20.533104 sh[717]: Success
Sep 13 00:10:20.556489 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:10:20.822906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:10:20.839409 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:10:20.845334 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:10:20.863485 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:10:20.863531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:20.870175 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:10:20.873392 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:10:20.876578 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:10:21.213453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:10:21.215394 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:10:21.229697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:10:21.235795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:10:21.262483 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:21.262535 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:21.266159 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:21.323519 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:21.331128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:10:21.346703 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:10:21.349842 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:21.354352 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:10:21.361710 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:10:21.374691 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:10:21.384872 systemd-networkd[897]: lo: Link UP
Sep 13 00:10:21.384882 systemd-networkd[897]: lo: Gained carrier
Sep 13 00:10:21.386778 systemd-networkd[897]: Enumeration completed
Sep 13 00:10:21.387556 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:21.387562 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:10:21.392538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:10:21.397186 systemd[1]: Reached target network.target - Network.
Sep 13 00:10:21.454484 kernel: mlx5_core da14:00:02.0 enP55828s1: Link up
Sep 13 00:10:21.454744 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:10:21.492570 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: Data path switched to VF: enP55828s1
Sep 13 00:10:21.492891 systemd-networkd[897]: enP55828s1: Link UP
Sep 13 00:10:21.493018 systemd-networkd[897]: eth0: Link UP
Sep 13 00:10:21.493206 systemd-networkd[897]: eth0: Gained carrier
Sep 13 00:10:21.493219 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:21.497363 systemd-networkd[897]: enP55828s1: Gained carrier
Sep 13 00:10:21.532501 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 13 00:10:22.574680 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 13 00:10:22.616227 ignition[901]: Ignition 2.19.0
Sep 13 00:10:22.616240 ignition[901]: Stage: fetch-offline
Sep 13 00:10:22.616282 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.616296 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.616408 ignition[901]: parsed url from cmdline: ""
Sep 13 00:10:22.616412 ignition[901]: no config URL provided
Sep 13 00:10:22.616419 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.616430 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.616438 ignition[901]: failed to fetch config: resource requires networking
Sep 13 00:10:22.633128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:10:22.616791 ignition[901]: Ignition finished successfully
Sep 13 00:10:22.648568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:10:22.664413 ignition[910]: Ignition 2.19.0
Sep 13 00:10:22.664424 ignition[910]: Stage: fetch
Sep 13 00:10:22.664663 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.664673 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.666459 ignition[910]: parsed url from cmdline: ""
Sep 13 00:10:22.666472 ignition[910]: no config URL provided
Sep 13 00:10:22.666516 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.666528 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.666552 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 00:10:22.754407 ignition[910]: GET result: OK
Sep 13 00:10:22.754695 ignition[910]: config has been read from IMDS userdata
Sep 13 00:10:22.754768 ignition[910]: parsing config with SHA512: cff578b6b698b86bb454de5d76c86606a0a7632b856d449343ac297da33eaef23f2166365432f9727c8532c4ff4d6be00b316de0c96099181f703988fd656a37
Sep 13 00:10:22.762852 unknown[910]: fetched base config from "system"
Sep 13 00:10:22.763200 ignition[910]: fetch: fetch complete
Sep 13 00:10:22.762867 unknown[910]: fetched base config from "system"
Sep 13 00:10:22.763204 ignition[910]: fetch: fetch passed
Sep 13 00:10:22.762875 unknown[910]: fetched user config from "azure"
Sep 13 00:10:22.763240 ignition[910]: Ignition finished successfully
Sep 13 00:10:22.766042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:10:22.777802 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:10:22.803064 ignition[916]: Ignition 2.19.0
Sep 13 00:10:22.803080 ignition[916]: Stage: kargs
Sep 13 00:10:22.806236 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:10:22.803296 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.803309 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.804527 ignition[916]: kargs: kargs passed
Sep 13 00:10:22.818576 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:10:22.804567 ignition[916]: Ignition finished successfully
Sep 13 00:10:22.832044 ignition[922]: Ignition 2.19.0
Sep 13 00:10:22.832056 ignition[922]: Stage: disks
Sep 13 00:10:22.834210 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:10:22.832256 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.838842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:10:22.832269 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.846785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:10:22.833109 ignition[922]: disks: disks passed
Sep 13 00:10:22.833148 ignition[922]: Ignition finished successfully
Sep 13 00:10:22.865180 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:10:22.868031 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:10:22.876958 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:10:22.888663 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:10:22.937138 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 13 00:10:22.944092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:10:22.957637 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:10:23.053480 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:10:23.053979 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:10:23.059048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:10:23.095565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:10:23.113485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Sep 13 00:10:23.118481 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:23.118520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:23.124918 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:23.128639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:10:23.137659 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:23.138660 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:10:23.143046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:10:23.143077 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:10:23.149190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:10:23.149312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:10:23.178620 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:10:23.839719 coreos-metadata[956]: Sep 13 00:10:23.839 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 00:10:23.846655 coreos-metadata[956]: Sep 13 00:10:23.846 INFO Fetch successful
Sep 13 00:10:23.850013 coreos-metadata[956]: Sep 13 00:10:23.847 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 00:10:23.856820 coreos-metadata[956]: Sep 13 00:10:23.856 INFO Fetch successful
Sep 13 00:10:23.870061 coreos-metadata[956]: Sep 13 00:10:23.870 INFO wrote hostname ci-4081.3.5-n-78cb87e672 to /sysroot/etc/hostname
Sep 13 00:10:23.871789 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:10:23.886168 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:10:23.912834 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:10:23.935060 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:10:23.949785 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:10:25.270325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:10:25.279574 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:10:25.293498 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:25.293631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:10:25.298250 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:10:25.327366 ignition[1061]: INFO : Ignition 2.19.0
Sep 13 00:10:25.327366 ignition[1061]: INFO : Stage: mount
Sep 13 00:10:25.332542 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:25.332542 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:25.336517 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:10:25.342254 ignition[1061]: INFO : mount: mount passed
Sep 13 00:10:25.342254 ignition[1061]: INFO : Ignition finished successfully
Sep 13 00:10:25.348362 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:10:25.361601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:10:25.370312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:10:25.400485 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Sep 13 00:10:25.407618 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:25.407682 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:25.410562 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:25.418487 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:25.420334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:10:25.448821 ignition[1089]: INFO : Ignition 2.19.0
Sep 13 00:10:25.451417 ignition[1089]: INFO : Stage: files
Sep 13 00:10:25.451417 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:25.451417 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:25.460253 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:10:25.467378 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:10:25.467378 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:10:25.532289 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:10:25.537014 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:10:25.537014 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:10:25.532734 unknown[1089]: wrote ssh authorized keys file for user: core
Sep 13 00:10:25.563645 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:25.569396 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:10:25.626447 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:10:25.734179 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:25.755719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:25.755719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:25.765886 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:25.765886 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:25.776192 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:25.781372 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:10:26.121658 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:10:26.444572 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:26.444572 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:10:26.478776 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: files passed
Sep 13 00:10:26.484528 ignition[1089]: INFO : Ignition finished successfully
Sep 13 00:10:26.480651 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:10:26.519956 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:10:26.525612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:10:26.533010 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:10:26.533124 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:10:26.549612 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.549612 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.548130 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:26.568642 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.553294 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:10:26.576700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:10:26.600397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:10:26.600529 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:10:26.606918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:10:26.612638 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:10:26.618144 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:10:26.626654 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:10:26.641254 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:26.656605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:10:26.668471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:26.675536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:26.679158 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:10:26.687684 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:10:26.687826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:26.697802 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:10:26.700909 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:10:26.703588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:10:26.709353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:10:26.715407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:10:26.721717 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:10:26.728045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:10:26.734902 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:10:26.744068 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:10:26.752519 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:10:26.754774 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:10:26.754893 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:10:26.760842 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:26.766314 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:26.772674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:10:26.775572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:26.786514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:10:26.786635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:10:26.792747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:10:26.792892 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:26.804988 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:10:26.805093 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:10:26.817159 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:10:26.817330 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:10:26.831661 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:10:26.839606 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:10:26.843685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:10:26.849381 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:26.853034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:10:26.866016 ignition[1142]: INFO : Ignition 2.19.0
Sep 13 00:10:26.866016 ignition[1142]: INFO : Stage: umount
Sep 13 00:10:26.866016 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:26.866016 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:26.866016 ignition[1142]: INFO : umount: umount passed
Sep 13 00:10:26.866016 ignition[1142]: INFO : Ignition finished successfully
Sep 13 00:10:26.853223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:10:26.867778 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:10:26.867896 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:10:26.874152 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:10:26.874405 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:10:26.885580 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:10:26.885634 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:10:26.892868 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:10:26.895861 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:10:26.901551 systemd[1]: Stopped target network.target - Network.
Sep 13 00:10:26.906968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:10:26.907040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:10:26.912874 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:10:26.918517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:10:26.921191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:26.924913 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:10:26.927529 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:10:26.930351 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:10:26.930397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:26.936215 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:10:26.936264 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:26.941838 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:10:26.941899 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:10:26.949962 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:10:26.952825 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:26.964292 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:10:26.974062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:26.978736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:10:26.979423 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:10:26.980498 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:10:26.983355 systemd-networkd[897]: eth0: DHCPv6 lease lost
Sep 13 00:10:26.985728 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:10:26.985838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:27.015592 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:10:27.015711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:10:27.023331 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:10:27.023443 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:27.041985 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:10:27.044719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:10:27.044779 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:10:27.051737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:10:27.051784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:27.058740 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:10:27.058790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:27.067901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:10:27.067961 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:27.075248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:27.105489 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:10:27.105632 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:27.114308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:10:27.114395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:27.119043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:10:27.119090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:27.126136 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:10:27.126192 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:10:27.131764 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:10:27.131814 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:10:27.160775 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: Data path switched from VF: enP55828s1
Sep 13 00:10:27.137865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:27.137915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:27.149681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:10:27.164889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:10:27.164951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:27.170515 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:10:27.170574 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:27.177358 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:10:27.177413 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:27.183876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:27.183930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:27.190341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:10:27.190492 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:10:27.204639 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:10:27.204994 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:10:27.467932 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:10:27.468066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:10:27.474079 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:10:27.475156 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:10:27.475210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:10:27.495689 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:10:27.874326 systemd[1]: Switching root.
Sep 13 00:10:27.951705 systemd-journald[176]: Journal stopped
Sep 13 00:10:17.120063 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 13 00:10:17.120075 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 13 00:10:17.120088 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 13 00:10:17.120102 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 13 00:10:17.120114 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:10:17.120126 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:10:17.120138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 13 00:10:17.120152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 13 00:10:17.120164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 13 00:10:17.120180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 13 00:10:17.120194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 13 00:10:17.120207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 13 00:10:17.120220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 13 00:10:17.120233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 13 00:10:17.120246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 13 00:10:17.120259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 13 00:10:17.120271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 13 00:10:17.120291 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 13 00:10:17.120307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 13 00:10:17.120333 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 13 00:10:17.120355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 13 00:10:17.120369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 13 00:10:17.120382 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 13 00:10:17.120396 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 13 00:10:17.120410 kernel: Zone ranges:
Sep 13 00:10:17.120424 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:10:17.120441 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:10:17.120455 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:10:17.120469 kernel: Movable zone start for each node
Sep 13 00:10:17.120483 kernel: Early memory node ranges
Sep 13 00:10:17.120496 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:10:17.120510 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 13 00:10:17.120524 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd2fff]
Sep 13 00:10:17.120537 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 13 00:10:17.120562 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:10:17.120587 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 13 00:10:17.120601 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:10:17.120615 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:10:17.120628 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Sep 13 00:10:17.120642 kernel: On node 0, zone DMA32: 44 pages in unavailable ranges
Sep 13 00:10:17.120656 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 13 00:10:17.120669 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 13 00:10:17.120683 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:10:17.120697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:10:17.120714 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:10:17.120727 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 13 00:10:17.120741 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:10:17.120755 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 13 00:10:17.120768 kernel: Booting paravirtualized kernel on Hyper-V
Sep 13 00:10:17.120782 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:10:17.120796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:10:17.120810 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:10:17.120824 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:10:17.120840 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:10:17.120853 kernel: Hyper-V: PV spinlocks enabled
Sep 13 00:10:17.120867 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:10:17.120883 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:17.120897 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:10:17.120910 kernel: random: crng init done
Sep 13 00:10:17.120924 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:10:17.120938 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:10:17.120955 kernel: Fallback order for Node 0: 0
Sep 13 00:10:17.120979 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062376
Sep 13 00:10:17.120993 kernel: Policy zone: Normal
Sep 13 00:10:17.121011 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:10:17.121025 kernel: software IO TLB: area num 2.
Sep 13 00:10:17.121040 kernel: Memory: 8074604K/8387516K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 312652K reserved, 0K cma-reserved)
Sep 13 00:10:17.121055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:10:17.121069 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:10:17.121084 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:10:17.121098 kernel: Dynamic Preempt: voluntary
Sep 13 00:10:17.121113 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:10:17.121131 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:10:17.121146 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:10:17.121161 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:10:17.121176 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:10:17.121191 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:10:17.121206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:10:17.121224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:10:17.121238 kernel: Using NULL legacy PIC
Sep 13 00:10:17.121253 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 13 00:10:17.121267 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:10:17.121282 kernel: Console: colour dummy device 80x25
Sep 13 00:10:17.121297 kernel: printk: console [tty1] enabled
Sep 13 00:10:17.121312 kernel: printk: console [ttyS0] enabled
Sep 13 00:10:17.121326 kernel: printk: bootconsole [earlyser0] disabled
Sep 13 00:10:17.121341 kernel: ACPI: Core revision 20230628
Sep 13 00:10:17.121358 kernel: Failed to register legacy timer interrupt
Sep 13 00:10:17.121373 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:10:17.121387 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 13 00:10:17.121402 kernel: Hyper-V: Using IPI hypercalls
Sep 13 00:10:17.121417 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 13 00:10:17.121432 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 13 00:10:17.121447 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 13 00:10:17.121462 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 13 00:10:17.121476 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 13 00:10:17.121494 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 13 00:10:17.121509 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909)
Sep 13 00:10:17.121524 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:10:17.121539 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:10:17.121567 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:10:17.121582 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:10:17.121597 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:10:17.121611 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:10:17.121626 kernel: RETBleed: Vulnerable
Sep 13 00:10:17.121640 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:10:17.121658 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:17.121673 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:10:17.121688 kernel: active return thunk: its_return_thunk
Sep 13 00:10:17.121702 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:10:17.121716 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:10:17.121731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:10:17.121746 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:10:17.121760 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:10:17.121775 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:10:17.121789 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:10:17.121807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:10:17.121821 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 13 00:10:17.121836 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 13 00:10:17.121850 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 13 00:10:17.121865 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 13 00:10:17.121879 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:10:17.121894 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:10:17.121908 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:10:17.121923 kernel: landlock: Up and running.
Sep 13 00:10:17.121937 kernel: SELinux: Initializing.
Sep 13 00:10:17.121952 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.121966 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.121983 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:10:17.122002 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122030 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122059 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:10:17.122072 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:10:17.122085 kernel: signal: max sigframe size: 3632
Sep 13 00:10:17.122099 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:10:17.122114 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:10:17.122128 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:10:17.122144 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:10:17.122156 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:10:17.122170 kernel: .... node #0, CPUs: #1
Sep 13 00:10:17.122185 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 13 00:10:17.122201 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:10:17.122213 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:10:17.122225 kernel: smpboot: Max logical packages: 1
Sep 13 00:10:17.122240 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Sep 13 00:10:17.122256 kernel: devtmpfs: initialized
Sep 13 00:10:17.122273 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:10:17.122291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 13 00:10:17.122304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:10:17.122317 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:10:17.122332 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:10:17.122346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:10:17.122359 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:10:17.122372 kernel: audit: type=2000 audit(1757722215.029:1): state=initialized audit_enabled=0 res=1
Sep 13 00:10:17.122390 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:10:17.122404 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:10:17.122419 kernel: cpuidle: using governor menu
Sep 13 00:10:17.122433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:10:17.122448 kernel: dca service started, version 1.12.1
Sep 13 00:10:17.122463 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 13 00:10:17.122478 kernel: e820: reserve RAM buffer [mem 0x3ffd3000-0x3fffffff]
Sep 13 00:10:17.122493 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:10:17.122508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:10:17.122527 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:10:17.122541 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:10:17.122575 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:10:17.122591 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:10:17.122606 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:10:17.122621 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:10:17.122636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:10:17.122651 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:10:17.122666 kernel: ACPI: Interpreter enabled
Sep 13 00:10:17.122685 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:10:17.122700 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:10:17.122715 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:10:17.122730 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:10:17.122746 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 13 00:10:17.122760 kernel: iommu: Default domain type: Translated
Sep 13 00:10:17.122776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:10:17.122791 kernel: efivars: Registered efivars operations
Sep 13 00:10:17.122804 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:10:17.122821 kernel: PCI: System does not support PCI
Sep 13 00:10:17.122835 kernel: vgaarb: loaded
Sep 13 00:10:17.122850 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 13 00:10:17.122865 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:10:17.122879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:10:17.122894 kernel: pnp: PnP ACPI init
Sep 13 00:10:17.122909 kernel: pnp: PnP ACPI: found 3 devices
Sep 13 00:10:17.122924 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:10:17.122938 kernel: NET: Registered PF_INET protocol family
Sep 13 00:10:17.122956 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:10:17.122971 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:10:17.122985 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:10:17.122998 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:10:17.123013 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:10:17.123027 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:10:17.123041 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.123056 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:10:17.123069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:10:17.123087 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:10:17.123101 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:10:17.123115 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:10:17.123130 kernel: software IO TLB: mapped [mem 0x000000003b339000-0x000000003f339000] (64MB)
Sep 13 00:10:17.123144 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:10:17.123158 kernel: Initialise system trusted keyrings
Sep 13 00:10:17.123172 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:10:17.123186 kernel: Key type asymmetric registered
Sep 13 00:10:17.123200 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:10:17.123216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:10:17.123230 kernel: io scheduler mq-deadline registered
Sep 13 00:10:17.123245 kernel: io scheduler kyber registered
Sep 13 00:10:17.123259 kernel: io scheduler bfq registered
Sep 13 00:10:17.123272 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:10:17.123287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:10:17.123301 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:10:17.123316 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:10:17.123330 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 00:10:17.123506 kernel: rtc_cmos 00:02: registered as rtc0
Sep 13 00:10:17.123648 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:10:16 UTC (1757722216)
Sep 13 00:10:17.123774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 13 00:10:17.123791 kernel: intel_pstate: CPU model not supported
Sep 13 00:10:17.123806 kernel: efifb: probing for efifb
Sep 13 00:10:17.123821 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:10:17.123834 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:10:17.123848 kernel: efifb: scrolling: redraw
Sep 13 00:10:17.123867 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:10:17.123881 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:10:17.123895 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:10:17.123909 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:10:17.123922 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:10:17.123936 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:10:17.123948 kernel: Segment Routing with IPv6
Sep 13 00:10:17.123961 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:10:17.123976 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:10:17.123993 kernel: Key type dns_resolver registered
Sep 13 00:10:17.124007 kernel: IPI shorthand broadcast: enabled
Sep 13 00:10:17.124022 kernel: sched_clock: Marking stable (945002600, 54804200)->(1237797300, -237990500)
Sep 13 00:10:17.124035 kernel: registered taskstats version 1
Sep 13 00:10:17.124049 kernel: Loading compiled-in X.509 certificates
Sep 13 00:10:17.124063 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:10:17.124076 kernel: Key type .fscrypt registered
Sep 13 00:10:17.124089 kernel: Key type fscrypt-provisioning registered
Sep 13 00:10:17.124102 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:10:17.124119 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:10:17.124134 kernel: ima: No architecture policies found
Sep 13 00:10:17.124148 kernel: clk: Disabling unused clocks
Sep 13 00:10:17.124161 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:10:17.124187 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:10:17.124205 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:10:17.124218 kernel: Run /init as init process
Sep 13 00:10:17.124229 kernel: with arguments:
Sep 13 00:10:17.124244 kernel: /init
Sep 13 00:10:17.124261 kernel: with environment:
Sep 13 00:10:17.124279 kernel: HOME=/
Sep 13 00:10:17.124292 kernel: TERM=linux
Sep 13 00:10:17.124304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:10:17.124321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:10:17.124339 systemd[1]: Detected virtualization microsoft.
Sep 13 00:10:17.124355 systemd[1]: Detected architecture x86-64.
Sep 13 00:10:17.124374 systemd[1]: Running in initrd.
Sep 13 00:10:17.124389 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:10:17.124404 systemd[1]: Hostname set to .
Sep 13 00:10:17.124421 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:10:17.124436 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:10:17.124452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:17.124467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:17.124484 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:10:17.124503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:10:17.124518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:10:17.124533 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:10:17.124807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:10:17.124828 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:10:17.124843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:17.124858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:17.124878 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:10:17.124893 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:10:17.124908 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:10:17.124922 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:10:17.124938 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:17.124953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:17.124968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:10:17.124983 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:10:17.124998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:17.125016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:17.125031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:17.125046 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:10:17.125061 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:10:17.125076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:10:17.125091 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:10:17.125106 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:10:17.125121 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:10:17.125136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:10:17.125175 systemd-journald[176]: Collecting audit messages is disabled.
Sep 13 00:10:17.125208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:17.125223 systemd-journald[176]: Journal started
Sep 13 00:10:17.125259 systemd-journald[176]: Runtime Journal (/run/log/journal/2d8fcf731cc947989c83285752018a4c) is 8.0M, max 158.8M, 150.8M free.
Sep 13 00:10:17.122306 systemd-modules-load[177]: Inserted module 'overlay'
Sep 13 00:10:17.133096 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:10:17.137148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:17.140821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:17.144640 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:10:17.175256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:10:17.166762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:10:17.181498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:10:17.187592 kernel: Bridge firewalling registered
Sep 13 00:10:17.190285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:17.193472 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 13 00:10:17.203369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:17.207075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:17.217511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:17.227732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:17.233680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:10:17.238089 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:10:17.256136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:17.264545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:17.267984 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:17.280793 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:10:17.288720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:17.303581 dracut-cmdline[212]: dracut-dracut-053
Sep 13 00:10:17.308591 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:10:17.327583 systemd-resolved[213]: Positive Trust Anchors:
Sep 13 00:10:17.330226 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:10:17.334655 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:10:17.347837 systemd-resolved[213]: Defaulting to hostname 'linux'.
Sep 13 00:10:17.348754 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:17.356472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:17.400575 kernel: SCSI subsystem initialized
Sep 13 00:10:17.412571 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:10:17.423578 kernel: iscsi: registered transport (tcp)
Sep 13 00:10:17.444741 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:10:17.444803 kernel: QLogic iSCSI HBA Driver
Sep 13 00:10:17.480190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:10:17.490738 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:10:17.519884 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:10:17.519963 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:10:17.524573 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:10:17.564580 kernel: raid6: avx512x4 gen() 18451 MB/s
Sep 13 00:10:17.583567 kernel: raid6: avx512x2 gen() 18619 MB/s
Sep 13 00:10:17.602566 kernel: raid6: avx512x1 gen() 17730 MB/s
Sep 13 00:10:17.622568 kernel: raid6: avx2x4 gen() 18577 MB/s
Sep 13 00:10:17.641564 kernel: raid6: avx2x2 gen() 18534 MB/s
Sep 13 00:10:17.662133 kernel: raid6: avx2x1 gen() 14081 MB/s
Sep 13 00:10:17.662163 kernel: raid6: using algorithm avx512x2 gen() 18619 MB/s
Sep 13 00:10:17.684905 kernel: raid6: .... xor() 30454 MB/s, rmw enabled
Sep 13 00:10:17.684945 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:10:17.707569 kernel: xor: automatically using best checksumming function avx
Sep 13 00:10:17.855580 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:10:17.865474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:10:17.875746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:17.893331 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Sep 13 00:10:17.900221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:17.913776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:10:17.927249 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 13 00:10:17.956640 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:10:17.965705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:10:18.011919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:18.023740 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:10:18.054081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:10:18.062997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:10:18.071564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:18.078643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:10:18.089761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:10:18.111491 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:10:18.118104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:10:18.145572 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:10:18.145618 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:10:18.146577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:18.146799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:18.159277 kernel: hv_vmbus: Vmbus version:5.2
Sep 13 00:10:18.160093 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:18.163626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:18.163880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:18.167068 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:18.190888 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 00:10:18.201790 kernel: scsi host0: storvsc_host_t
Sep 13 00:10:18.201857 kernel: scsi host1: storvsc_host_t
Sep 13 00:10:18.195311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:10:18.217550 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 00:10:18.217695 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:10:18.217787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:10:18.221654 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 00:10:18.221694 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 00:10:18.234584 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 00:10:18.241580 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 13 00:10:18.248607 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:10:18.254243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:18.270631 kernel: PTP clock support registered
Sep 13 00:10:18.266768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:10:18.291010 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:10:18.291057 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:10:18.299780 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:10:18.299824 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:10:18.299847 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:10:18.300569 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 00:10:18.734451 systemd-resolved[213]: Clock change detected. Flushing caches.
Sep 13 00:10:18.748015 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 00:10:18.748045 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 00:10:18.748219 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 00:10:18.748348 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:10:18.760480 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 00:10:18.762542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:18.784494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:10:18.791943 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 00:10:18.792274 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 00:10:18.795333 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:10:18.795556 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 00:10:18.800480 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 00:10:18.809031 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:10:18.809314 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:10:18.819485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 13 00:10:18.866389 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: VF slot 1 added
Sep 13 00:10:18.874708 kernel: hv_vmbus: registering driver hv_pci
Sep 13 00:10:18.879489 kernel: hv_pci a6b1ef4f-da14-47de-970c-b98603098d18: PCI VMBus probing: Using version 0x10004
Sep 13 00:10:18.886705 kernel: hv_pci a6b1ef4f-da14-47de-970c-b98603098d18: PCI host bridge to bus da14:00
Sep 13 00:10:18.886978 kernel: pci_bus da14:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 13 00:10:18.890407 kernel: pci_bus da14:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 00:10:18.895496 kernel: pci da14:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 13 00:10:18.901505 kernel: pci da14:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:10:18.905483 kernel: pci da14:00:02.0: enabling Extended Tags
Sep 13 00:10:18.918483 kernel: pci da14:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at da14:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 13 00:10:18.925505 kernel: pci_bus da14:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 00:10:18.925797 kernel: pci da14:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:10:19.092821 kernel: mlx5_core da14:00:02.0: enabling device (0000 -> 0002)
Sep 13 00:10:19.097487 kernel: mlx5_core da14:00:02.0: firmware version: 14.30.5000
Sep 13 00:10:19.287494 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (458)
Sep 13 00:10:19.305804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 13 00:10:19.336655 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (467)
Sep 13 00:10:19.336715 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: VF registering: eth1
Sep 13 00:10:19.338145 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 13 00:10:19.351908 kernel: mlx5_core da14:00:02.0 eth1: joined to eth0
Sep 13 00:10:19.352126 kernel: mlx5_core da14:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 13 00:10:19.362489 kernel: mlx5_core da14:00:02.0 enP55828s1: renamed from eth1
Sep 13 00:10:19.364349 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 13 00:10:19.373607 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 13 00:10:19.391603 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:10:19.432391 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 13 00:10:20.417441 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:10:20.417871 disk-uuid[599]: The operation has completed successfully.
Sep 13 00:10:20.497314 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:10:20.497432 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:10:20.525660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:10:20.533104 sh[717]: Success
Sep 13 00:10:20.556489 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:10:20.822906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:10:20.839409 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:10:20.845334 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:10:20.863485 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:10:20.863531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:20.870175 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:10:20.873392 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:10:20.876578 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:10:21.213453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:10:21.215394 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:10:21.229697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:10:21.235795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:10:21.262483 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:21.262535 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:21.266159 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:21.323519 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:21.331128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:10:21.346703 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:10:21.349842 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:21.354352 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:10:21.361710 systemd[1]: Finished ignition-setup.service - Ignition (setup).
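disk-uuid.service exists because every VM cloned from the same image would otherwise share GPT identifiers; on first boot it regenerates the disk GUID, which triggers the partition-table rescan ("sda: sda1 sda2 ...") logged right after it. A rough sketch of that idea, not Flatcar's actual implementation, using sgdisk's GUID-randomization flag:

    import subprocess

    def randomize_gpt_guids(disk: str = "/dev/sda") -> None:
        # sgdisk -G assigns a fresh random GUID to the disk and to every partition.
        subprocess.run(["sgdisk", "-G", disk], check=True)
        # Ask the kernel to re-read the partition table so device nodes refresh.
        subprocess.run(["partprobe", disk], check=True)

    randomize_gpt_guids()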
Sep 13 00:10:21.374691 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:10:21.384872 systemd-networkd[897]: lo: Link UP
Sep 13 00:10:21.384882 systemd-networkd[897]: lo: Gained carrier
Sep 13 00:10:21.386778 systemd-networkd[897]: Enumeration completed
Sep 13 00:10:21.387556 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:21.387562 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:10:21.392538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:10:21.397186 systemd[1]: Reached target network.target - Network.
Sep 13 00:10:21.454484 kernel: mlx5_core da14:00:02.0 enP55828s1: Link up
Sep 13 00:10:21.454744 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:10:21.492570 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: Data path switched to VF: enP55828s1
Sep 13 00:10:21.492891 systemd-networkd[897]: enP55828s1: Link UP
Sep 13 00:10:21.493018 systemd-networkd[897]: eth0: Link UP
Sep 13 00:10:21.493206 systemd-networkd[897]: eth0: Gained carrier
Sep 13 00:10:21.493219 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:10:21.497363 systemd-networkd[897]: enP55828s1: Gained carrier
Sep 13 00:10:21.532501 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 13 00:10:22.574680 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 13 00:10:22.616227 ignition[901]: Ignition 2.19.0
Sep 13 00:10:22.616240 ignition[901]: Stage: fetch-offline
Sep 13 00:10:22.616282 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.616296 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.616408 ignition[901]: parsed url from cmdline: ""
Sep 13 00:10:22.616412 ignition[901]: no config URL provided
Sep 13 00:10:22.616419 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.616430 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.616438 ignition[901]: failed to fetch config: resource requires networking
Sep 13 00:10:22.633128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:10:22.616791 ignition[901]: Ignition finished successfully
Sep 13 00:10:22.648568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
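The zz-default.network unit that matched eth0 is Flatcar's lowest-priority catch-all, which is why networkd warns that the match is based on a potentially unpredictable interface name. Its effect is roughly the following (contents assumed for illustration, not read from this system):

    [Match]
    Name=*

    [Network]
    DHCP=yes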
Sep 13 00:10:22.664413 ignition[910]: Ignition 2.19.0
Sep 13 00:10:22.664424 ignition[910]: Stage: fetch
Sep 13 00:10:22.664663 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.664673 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.666459 ignition[910]: parsed url from cmdline: ""
Sep 13 00:10:22.666472 ignition[910]: no config URL provided
Sep 13 00:10:22.666516 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.666528 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:10:22.666552 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 00:10:22.754407 ignition[910]: GET result: OK
Sep 13 00:10:22.754695 ignition[910]: config has been read from IMDS userdata
Sep 13 00:10:22.754768 ignition[910]: parsing config with SHA512: cff578b6b698b86bb454de5d76c86606a0a7632b856d449343ac297da33eaef23f2166365432f9727c8532c4ff4d6be00b316de0c96099181f703988fd656a37
Sep 13 00:10:22.762852 unknown[910]: fetched base config from "system"
Sep 13 00:10:22.763200 ignition[910]: fetch: fetch complete
Sep 13 00:10:22.762867 unknown[910]: fetched base config from "system"
Sep 13 00:10:22.763204 ignition[910]: fetch: fetch passed
Sep 13 00:10:22.762875 unknown[910]: fetched user config from "azure"
Sep 13 00:10:22.763240 ignition[910]: Ignition finished successfully
Sep 13 00:10:22.766042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:10:22.777802 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:10:22.803064 ignition[916]: Ignition 2.19.0
Sep 13 00:10:22.803080 ignition[916]: Stage: kargs
Sep 13 00:10:22.806236 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:10:22.803296 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.803309 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.804527 ignition[916]: kargs: kargs passed
Sep 13 00:10:22.818576 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:10:22.804567 ignition[916]: Ignition finished successfully
Sep 13 00:10:22.832044 ignition[922]: Ignition 2.19.0
Sep 13 00:10:22.832056 ignition[922]: Stage: disks
Sep 13 00:10:22.834210 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:10:22.832256 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:22.838842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:10:22.832269 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:22.846785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:10:22.833109 ignition[922]: disks: disks passed
Sep 13 00:10:22.833148 ignition[922]: Ignition finished successfully
Sep 13 00:10:22.865180 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:10:22.868031 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:10:22.876958 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:10:22.888663 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:10:22.937138 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 13 00:10:22.944092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
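The fetch stage's GET line shows Ignition reading user data from the Azure Instance Metadata Service, then logging a SHA512 of the payload before parsing it. A minimal sketch of that flow (the URL is taken from the log; the mandatory "Metadata: true" header and the base64 encoding of userData are standard IMDS behavior rather than something visible here):

    import base64, hashlib, urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    # IMDS rejects requests that lack this header.
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        user_data = base64.b64decode(resp.read())  # userData is served base64-encoded
    print("parsing config with SHA512:", hashlib.sha512(user_data).hexdigest())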
Sep 13 00:10:22.957637 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:10:23.053480 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:10:23.053979 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:10:23.059048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:10:23.095565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:10:23.113485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Sep 13 00:10:23.118481 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:23.118520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:23.124918 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:23.128639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:10:23.137659 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:23.138660 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:10:23.143046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:10:23.143077 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:10:23.149190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:10:23.149312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:10:23.178620 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:10:23.839719 coreos-metadata[956]: Sep 13 00:10:23.839 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 00:10:23.846655 coreos-metadata[956]: Sep 13 00:10:23.846 INFO Fetch successful
Sep 13 00:10:23.850013 coreos-metadata[956]: Sep 13 00:10:23.847 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 00:10:23.856820 coreos-metadata[956]: Sep 13 00:10:23.856 INFO Fetch successful
Sep 13 00:10:23.870061 coreos-metadata[956]: Sep 13 00:10:23.870 INFO wrote hostname ci-4081.3.5-n-78cb87e672 to /sysroot/etc/hostname
Sep 13 00:10:23.871789 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:10:23.886168 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:10:23.912834 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:10:23.935060 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:10:23.949785 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:10:25.270325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:10:25.279574 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:10:25.293498 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:25.293631 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:10:25.298250 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:10:25.327366 ignition[1061]: INFO : Ignition 2.19.0
Sep 13 00:10:25.327366 ignition[1061]: INFO : Stage: mount
Sep 13 00:10:25.332542 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:25.332542 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:25.336517 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:10:25.342254 ignition[1061]: INFO : mount: mount passed
Sep 13 00:10:25.342254 ignition[1061]: INFO : Ignition finished successfully
Sep 13 00:10:25.348362 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:10:25.361601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:10:25.370312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:10:25.400485 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Sep 13 00:10:25.407618 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:10:25.407682 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:10:25.410562 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:10:25.418487 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:10:25.420334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:10:25.448821 ignition[1089]: INFO : Ignition 2.19.0
Sep 13 00:10:25.451417 ignition[1089]: INFO : Stage: files
Sep 13 00:10:25.451417 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:25.451417 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:25.460253 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:10:25.467378 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:10:25.467378 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:10:25.532289 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:10:25.537014 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:10:25.537014 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:10:25.532734 unknown[1089]: wrote ssh authorized keys file for user: core
Sep 13 00:10:25.563645 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:25.569396 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:10:25.626447 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:10:25.734179 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:10:25.740064 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:25.755719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:10:25.755719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:25.765886 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:10:25.765886 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:25.776192 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:10:25.781372 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:25.786746 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:10:26.121658 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:10:26.444572 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:10:26.444572 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:10:26.478776 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:10:26.484528 ignition[1089]: INFO : files: files passed
Sep 13 00:10:26.484528 ignition[1089]: INFO : Ignition finished successfully
Sep 13 00:10:26.480651 systemd[1]: Finished ignition-files.service - Ignition (files).
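The files stage above replays a user-supplied Ignition config: SSH keys for "core", downloaded files, a symlink enabling the kubernetes sysext, and the prepare-helm.service unit with an enablement preset. A config producing ops like these would look roughly as follows (a hand-written sketch against the Ignition v3 schema; this is not the actual config recovered from IMDS, and the key and unit contents are placeholders):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [{ "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"] }]
      },
      "storage": {
        "files": [{
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        }],
        "links": [{
          "path": "/etc/extensions/kubernetes.raw",
          "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
        }]
      },
      "systemd": {
        "units": [{ "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }]
      }
    }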
Sep 13 00:10:26.519956 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:10:26.525612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:10:26.533010 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:10:26.533124 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:10:26.549612 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.549612 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.548130 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:26.568642 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:10:26.553294 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:10:26.576700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:10:26.600397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:10:26.600529 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:10:26.606918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:10:26.612638 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:10:26.618144 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:10:26.626654 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:10:26.641254 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:26.656605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:10:26.668471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:10:26.675536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:26.679158 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:10:26.687684 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:10:26.687826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:10:26.697802 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:10:26.700909 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:10:26.703588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:10:26.709353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:10:26.715407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:10:26.721717 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:10:26.728045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:10:26.734902 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:10:26.744068 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:10:26.752519 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:10:26.754774 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:10:26.754893 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:10:26.760842 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:10:26.766314 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:26.772674 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:10:26.775572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:26.786514 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:10:26.786635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:10:26.792747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:10:26.792892 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:10:26.804988 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:10:26.805093 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:10:26.817159 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:10:26.817330 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:10:26.831661 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:10:26.839606 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:10:26.843685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:10:26.849381 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:26.853034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:10:26.866016 ignition[1142]: INFO : Ignition 2.19.0
Sep 13 00:10:26.866016 ignition[1142]: INFO : Stage: umount
Sep 13 00:10:26.866016 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:10:26.866016 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:10:26.866016 ignition[1142]: INFO : umount: umount passed
Sep 13 00:10:26.866016 ignition[1142]: INFO : Ignition finished successfully
Sep 13 00:10:26.853223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:10:26.867778 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:10:26.867896 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:10:26.874152 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:10:26.874405 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:10:26.885580 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:10:26.885634 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:10:26.892868 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:10:26.895861 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:10:26.901551 systemd[1]: Stopped target network.target - Network.
Sep 13 00:10:26.906968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:10:26.907040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:10:26.912874 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:10:26.918517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:10:26.921191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:26.924913 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:10:26.927529 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:10:26.930351 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:10:26.930397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:10:26.936215 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:10:26.936264 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:10:26.941838 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:10:26.941899 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:10:26.949962 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:10:26.952825 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:10:26.964292 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:10:26.974062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:10:26.978736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:10:26.979423 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:10:26.980498 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:10:26.983355 systemd-networkd[897]: eth0: DHCPv6 lease lost
Sep 13 00:10:26.985728 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:10:26.985838 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:10:27.015592 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:10:27.015711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:10:27.023331 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:10:27.023443 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:27.041985 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:10:27.044719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:10:27.044779 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:10:27.051737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:10:27.051784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:27.058740 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:10:27.058790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:27.067901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:10:27.067961 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:10:27.075248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:27.105489 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:10:27.105632 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:10:27.114308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:10:27.114395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:27.119043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:10:27.119090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:27.126136 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:10:27.126192 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:10:27.131764 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:10:27.131814 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:10:27.160775 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: Data path switched from VF: enP55828s1
Sep 13 00:10:27.137865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:10:27.137915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:10:27.149681 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:10:27.164889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:10:27.164951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:27.170515 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:10:27.170574 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:27.177358 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:10:27.177413 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:27.183876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:10:27.183930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:10:27.190341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:10:27.190492 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:10:27.204639 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:10:27.204994 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:10:27.467932 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:10:27.468066 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:10:27.474079 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:10:27.475156 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:10:27.475210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:10:27.495689 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:10:27.874326 systemd[1]: Switching root.
Sep 13 00:10:27.951705 systemd-journald[176]: Journal stopped
Sep 13 00:10:34.652164 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:10:34.652213 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:10:34.652233 kernel: SELinux: policy capability open_perms=1
Sep 13 00:10:34.652250 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:10:34.652265 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:10:34.652283 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:10:34.652302 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:10:34.652324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:10:34.652341 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:10:34.652360 kernel: audit: type=1403 audit(1757722229.313:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:10:34.652378 systemd[1]: Successfully loaded SELinux policy in 242.847ms.
Sep 13 00:10:34.652397 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.574ms.
Sep 13 00:10:34.652419 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:10:34.652437 systemd[1]: Detected virtualization microsoft.
Sep 13 00:10:34.652476 systemd[1]: Detected architecture x86-64.
Sep 13 00:10:34.652499 systemd[1]: Detected first boot.
Sep 13 00:10:34.652523 systemd[1]: Hostname set to .
Sep 13 00:10:34.652543 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:10:34.652566 zram_generator::config[1185]: No configuration found.
Sep 13 00:10:34.652598 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:10:34.652621 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:10:34.652642 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:10:34.652666 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:10:34.652688 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:10:34.652709 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:10:34.652732 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:10:34.652759 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:10:34.652782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:10:34.652799 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:10:34.652820 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:10:34.652846 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:10:34.652867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:10:34.652888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:10:34.652911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:10:34.652935 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:10:34.652951 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:10:34.652966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:10:34.652980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:10:34.652995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:10:34.653010 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:10:34.653030 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:10:34.653041 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:10:34.653057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:10:34.653067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:10:34.653080 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:10:34.653091 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:10:34.653102 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:10:34.653114 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:10:34.653124 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:10:34.653138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:10:34.653149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:10:34.653162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:10:34.653173 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:10:34.653184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:10:34.653198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:10:34.653209 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:10:34.653221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:34.653232 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:10:34.653245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:10:34.653255 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:10:34.653268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:10:34.653280 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:10:34.653293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:10:34.653305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:10:34.653315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:10:34.653329 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:10:34.653339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:10:34.653351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:10:34.653362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:10:34.653373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:10:34.653386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:10:34.653399 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:10:34.653412 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:10:34.653422 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:10:34.653435 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:10:34.653445 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:10:34.653456 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:10:34.653505 kernel: fuse: init (API version 7.39)
Sep 13 00:10:34.653519 kernel: loop: module loaded
Sep 13 00:10:34.653534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:10:34.653548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:10:34.653559 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:10:34.653571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:10:34.653583 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:10:34.653595 systemd[1]: Stopped verity-setup.service.
Sep 13 00:10:34.653605 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:10:34.653637 systemd-journald[1274]: Collecting audit messages is disabled.
Sep 13 00:10:34.653663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:10:34.653677 systemd-journald[1274]: Journal started
Sep 13 00:10:34.653700 systemd-journald[1274]: Runtime Journal (/run/log/journal/4ed2c1fc9a8244d9a61f108ce5347cde) is 8.0M, max 158.8M, 150.8M free.
Sep 13 00:10:33.797626 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:10:33.968154 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:10:33.968581 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:10:34.662526 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:10:34.663072 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:10:34.666340 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:10:34.670073 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:10:34.673388 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:10:34.677168 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:10:34.680596 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:10:34.684617 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:10:34.688753 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:10:34.688988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:10:34.693209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:10:34.693440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:10:34.697683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:10:34.697923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:10:34.702035 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:10:34.702283 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:10:34.705708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:10:34.705947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:10:34.709777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:10:34.713621 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:10:34.721494 kernel: ACPI: bus type drm_connector registered
Sep 13 00:10:34.720779 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:10:34.725838 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:10:34.725995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:10:34.739706 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:10:34.750562 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:10:34.766545 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:10:34.771989 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:10:34.772046 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:10:34.777560 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:10:34.793298 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:10:34.802720 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:10:34.806202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:10:34.821653 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:10:34.826214 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:10:34.829424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:10:34.834595 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:10:34.837969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:10:34.841649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:10:34.852866 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:10:34.860154 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:10:34.870537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:10:34.876458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:10:34.880258 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:10:34.886572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:10:34.890482 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:10:34.898594 systemd-journald[1274]: Time spent on flushing to /var/log/journal/4ed2c1fc9a8244d9a61f108ce5347cde is 61.543ms for 961 entries.
Sep 13 00:10:34.898594 systemd-journald[1274]: System Journal (/var/log/journal/4ed2c1fc9a8244d9a61f108ce5347cde) is 8.0M, max 2.6G, 2.6G free.
Sep 13 00:10:35.021381 systemd-journald[1274]: Received client request to flush runtime journal.
Sep 13 00:10:35.021596 kernel: loop0: detected capacity change from 0 to 142488
Sep 13 00:10:34.898646 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:10:34.915262 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:10:34.922626 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:10:34.953977 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:10:35.022865 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:10:35.079173 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:10:35.080058 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:10:35.115255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:10:35.212656 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Sep 13 00:10:35.212684 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Sep 13 00:10:35.232362 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:10:35.241626 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:10:35.572495 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:10:35.646518 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:10:35.779859 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:10:35.792617 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:10:35.809007 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 13 00:10:35.810011 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 13 00:10:35.815396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:10:36.175493 kernel: loop2: detected capacity change from 0 to 221472
Sep 13 00:10:36.233493 kernel: loop3: detected capacity change from 0 to 31056
Sep 13 00:10:36.706490 kernel: loop4: detected capacity change from 0 to 142488
Sep 13 00:10:36.785565 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:10:36.807497 kernel: loop6: detected capacity change from 0 to 221472
Sep 13 00:10:36.808543 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:10:36.817633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:10:36.835493 kernel: loop7: detected capacity change from 0 to 31056
Sep 13 00:10:36.844300 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 13 00:10:36.844655 (sd-merge)[1349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 13 00:10:36.845179 (sd-merge)[1349]: Merged extensions into '/usr'.
Sep 13 00:10:36.848887 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:10:36.848903 systemd[1]: Reloading...
Sep 13 00:10:36.921563 zram_generator::config[1376]: No configuration found.
Sep 13 00:10:37.058436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:10:37.115681 systemd[1]: Reloading finished in 266 ms.
Sep 13 00:10:37.146814 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
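The (sd-merge) lines record systemd-sysext stacking the listed extension images (the loopN capacity changes above are those images being attached) into a read-only overlayfs on top of /usr. Conceptually the merge reduces to a mount like this sketch (the staging paths are hypothetical; the real helper discovers images and stages hierarchies under /run itself):

    import subprocess

    # Extension /usr trees already loop-mounted somewhere under /run;
    # the host's /usr is the bottom layer and extensions stack above it.
    layers = [
        "/run/sysext/kubernetes/usr",        # hypothetical staging paths
        "/run/sysext/docker-flatcar/usr",
    ]
    lowerdir = ":".join(layers + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", "/usr"],
        check=True,
    )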
Sep 13 00:10:37.161697 systemd[1]: Starting ensure-sysext.service... Sep 13 00:10:37.165010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:10:37.204663 systemd[1]: Reloading requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:10:37.204680 systemd[1]: Reloading... Sep 13 00:10:37.231143 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:10:37.231666 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:10:37.232979 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:10:37.233398 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Sep 13 00:10:37.233529 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Sep 13 00:10:37.279603 zram_generator::config[1464]: No configuration found. Sep 13 00:10:37.294403 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:10:37.294423 systemd-tmpfiles[1436]: Skipping /boot Sep 13 00:10:37.305970 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:10:37.305984 systemd-tmpfiles[1436]: Skipping /boot Sep 13 00:10:37.409685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:37.468127 systemd[1]: Reloading finished in 263 ms. Sep 13 00:10:37.491934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:10:37.504825 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:10:37.519267 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:10:37.532153 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:10:37.541807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:10:37.552915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:10:37.558315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.558915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:10:37.560174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:10:37.567738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:10:37.574635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:10:37.579375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:10:37.579616 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.581102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:37.581332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:10:37.586787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 13 00:10:37.586968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:10:37.593309 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:37.593538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:10:37.602841 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.603098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:10:37.608998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:10:37.613866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:10:37.625737 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:10:37.629270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:10:37.629466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.630396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:10:37.635080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:10:37.635242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:10:37.639780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:37.639954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:10:37.644322 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:37.644675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:10:37.651860 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:10:37.682130 systemd[1]: Finished ensure-sysext.service. Sep 13 00:10:37.688059 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Sep 13 00:10:37.695392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.695709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:10:37.704482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:10:37.717455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:10:37.732916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:10:37.739599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:10:37.747307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:10:37.757615 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:10:37.761747 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:10:37.784414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:10:37.789548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:10:37.790505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:10:37.790711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:10:37.802941 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:10:37.803125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:10:37.817382 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:10:37.822869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:10:37.824519 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:10:37.844674 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:10:37.845861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:10:37.855457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:10:37.855544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:10:37.893362 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:10:37.902650 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:10:37.933854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 13 00:10:37.963516 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:10:37.969482 kernel: hv_vmbus: registering driver hv_balloon Sep 13 00:10:37.973508 kernel: hv_vmbus: registering driver hyperv_fb Sep 13 00:10:37.975333 augenrules[1603]: No rules Sep 13 00:10:37.976906 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:10:37.990905 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 13 00:10:37.990953 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 13 00:10:37.990975 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 13 00:10:37.998340 kernel: Console: switching to colour dummy device 80x25 Sep 13 00:10:38.002491 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:10:38.043753 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Sep 13 00:10:38.157776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:10:38.193033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:10:38.193278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:10:38.200922 systemd-resolved[1529]: Positive Trust Anchors: Sep 13 00:10:38.200946 systemd-resolved[1529]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:10:38.200994 systemd-resolved[1529]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:10:38.218979 systemd-networkd[1578]: lo: Link UP Sep 13 00:10:38.219347 systemd-networkd[1578]: lo: Gained carrier Sep 13 00:10:38.223858 systemd-networkd[1578]: Enumeration completed Sep 13 00:10:38.224545 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:10:38.224639 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:10:38.228873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:10:38.237486 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1563) Sep 13 00:10:38.240519 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:10:38.247178 systemd-resolved[1529]: Using system hostname 'ci-4081.3.5-n-78cb87e672'. Sep 13 00:10:38.266399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:10:38.310492 kernel: mlx5_core da14:00:02.0 enP55828s1: Link up Sep 13 00:10:38.314406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 13 00:10:38.333839 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:10:38.333485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:10:38.370051 kernel: hv_netvsc 6045bddf-ab3c-6045-bddf-ab3c6045bddf eth0: Data path switched to VF: enP55828s1 Sep 13 00:10:38.372971 systemd-networkd[1578]: enP55828s1: Link UP Sep 13 00:10:38.373124 systemd-networkd[1578]: eth0: Link UP Sep 13 00:10:38.373129 systemd-networkd[1578]: eth0: Gained carrier Sep 13 00:10:38.373151 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:10:38.379064 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:10:38.382543 systemd[1]: Reached target network.target - Network. Sep 13 00:10:38.387058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:10:38.390699 systemd-networkd[1578]: enP55828s1: Gained carrier Sep 13 00:10:38.449206 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 13 00:10:38.474354 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:10:38.505489 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 13 00:10:39.467259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:10:39.726696 systemd-networkd[1578]: eth0: Gained IPv6LL Sep 13 00:10:39.729572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Sep 13 00:10:39.734215 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:10:39.921435 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:10:39.925735 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:10:40.056677 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:10:40.066693 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:10:40.224850 lvm[1673]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:10:40.261257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:10:40.266010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:10:40.274668 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:10:40.280297 lvm[1675]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:10:40.309651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:10:56.537373 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:10:56.553363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:10:56.561676 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:10:57.558299 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:10:57.562315 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:10:57.565818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:10:57.569279 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:10:57.573116 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:10:57.576411 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:10:57.579957 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:10:57.583812 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:10:57.583861 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:10:57.586644 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:10:57.612187 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:10:57.617089 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:10:57.632877 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:10:57.636981 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:10:57.640203 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:10:57.642914 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:10:57.645540 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:10:57.645568 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
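The ldconfig error above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is a magic-number check tripping over a plain-text file. A minimal sketch of that check; the helper name is ours, not ldconfig code:

    def looks_like_elf(path: str) -> bool:
        # ELF objects begin with the 4-byte magic 0x7f 'E' 'L' 'F'.
        # /lib/ld.so.conf is a text config file, so this returns False,
        # which is exactly what ldconfig is complaining about.
        with open(path, "rb") as f:
            return f.read(4) == b"\x7fELF"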
Sep 13 00:10:57.661591 systemd[1]: Starting chronyd.service - NTP client/server... Sep 13 00:10:57.667593 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:10:57.675627 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:10:57.686637 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:10:57.691928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:10:57.697369 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:10:57.700735 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:10:57.700792 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 13 00:10:57.710152 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 13 00:10:57.714071 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 13 00:10:57.721887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:57.725905 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 13 00:10:57.730970 jq[1686]: false Sep 13 00:10:57.732293 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:10:57.738440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:10:57.742670 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:10:57.749690 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:10:57.756640 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:10:57.764905 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:10:57.772425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:10:57.774541 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:10:57.780656 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:10:57.790629 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:10:57.795889 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:10:57.796132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:10:57.804898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:10:57.805155 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 00:10:57.809825 chronyd[1714]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 13 00:10:57.825409 extend-filesystems[1687]: Found loop4 Sep 13 00:10:57.830714 extend-filesystems[1687]: Found loop5 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found loop6 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found loop7 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda1 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda2 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda3 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found usr Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda4 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda6 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda7 Sep 13 00:10:57.838399 extend-filesystems[1687]: Found sda9 Sep 13 00:10:57.838399 extend-filesystems[1687]: Checking size of /dev/sda9 Sep 13 00:10:57.912420 jq[1703]: true Sep 13 00:10:57.870036 chronyd[1714]: Timezone right/UTC failed leap second check, ignoring Sep 13 00:10:57.839461 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:10:57.870350 chronyd[1714]: Loaded seccomp filter (level 2) Sep 13 00:10:57.875918 systemd[1]: Started chronyd.service - NTP client/server. Sep 13 00:10:57.914407 jq[1724]: true Sep 13 00:10:57.883153 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:10:57.883395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:10:57.902148 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:10:57.922600 extend-filesystems[1687]: Old size kept for /dev/sda9 Sep 13 00:10:57.928761 extend-filesystems[1687]: Found sr0 Sep 13 00:10:57.931842 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:10:57.932088 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:10:57.944221 update_engine[1702]: I20250913 00:10:57.942601 1702 main.cc:92] Flatcar Update Engine starting Sep 13 00:10:57.949239 tar[1710]: linux-amd64/helm Sep 13 00:10:58.027895 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1761) Sep 13 00:10:58.038965 systemd-logind[1699]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:10:58.042115 systemd-logind[1699]: New seat seat0. Sep 13 00:10:58.054129 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:10:58.062411 bash[1757]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:10:58.064077 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:10:58.070616 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:10:58.086105 dbus-daemon[1685]: [system] SELinux support is enabled Sep 13 00:10:58.086292 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:10:58.097109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:10:58.097276 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 13 00:10:58.102304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:10:58.102329 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:10:58.108026 dbus-daemon[1685]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:10:58.117763 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:10:58.125615 update_engine[1702]: I20250913 00:10:58.125420 1702 update_check_scheduler.cc:74] Next update check in 4m10s Sep 13 00:10:58.132524 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:10:58.281940 coreos-metadata[1684]: Sep 13 00:10:58.281 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:10:58.285677 coreos-metadata[1684]: Sep 13 00:10:58.284 INFO Fetch successful Sep 13 00:10:58.285677 coreos-metadata[1684]: Sep 13 00:10:58.285 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 13 00:10:58.292717 coreos-metadata[1684]: Sep 13 00:10:58.291 INFO Fetch successful Sep 13 00:10:58.292717 coreos-metadata[1684]: Sep 13 00:10:58.292 INFO Fetching http://168.63.129.16/machine/7c4d883b-5b25-4424-adaf-7e5cbbf68187/a958ea12%2Db0e1%2D4f03%2D8b76%2D71de6de94991.%5Fci%2D4081.3.5%2Dn%2D78cb87e672?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 13 00:10:58.301308 coreos-metadata[1684]: Sep 13 00:10:58.297 INFO Fetch successful Sep 13 00:10:58.301308 coreos-metadata[1684]: Sep 13 00:10:58.297 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:10:58.314327 coreos-metadata[1684]: Sep 13 00:10:58.312 INFO Fetch successful Sep 13 00:10:58.390194 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:10:58.398746 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:10:58.572906 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:10:58.702033 sshd_keygen[1730]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:10:58.750448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:10:58.764001 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:10:58.769696 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 13 00:10:58.775737 KVP[1688]: KVP starting; pid is:1688 Sep 13 00:10:58.804188 KVP[1688]: KVP LIC Version: 3.1 Sep 13 00:10:58.804484 kernel: hv_utils: KVP IC version 4.0 Sep 13 00:10:58.808205 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:10:58.809278 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:10:58.824033 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 13 00:10:58.839789 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:10:58.867339 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:10:58.881458 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:10:58.892637 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:10:58.897233 systemd[1]: Reached target getty.target - Login Prompts. 
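The coreos-metadata fetches above touch two Azure endpoints: the wireserver at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. A minimal sketch of the vmSize request, with the URL copied from the log; the "Metadata: true" header is IMDS's standard requirement, and the timeout value is our choice:

    import urllib.request

    # URL exactly as logged by coreos-metadata above.
    VMSIZE_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                  "?api-version=2017-08-01&format=text")

    def fetch_vm_size(timeout: float = 5.0) -> str:
        req = urllib.request.Request(VMSIZE_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode()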
Sep 13 00:10:59.008398 tar[1710]: linux-amd64/LICENSE Sep 13 00:10:59.008398 tar[1710]: linux-amd64/README.md Sep 13 00:10:59.021879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:10:59.494164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:59.499509 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:00.011355 containerd[1720]: time="2025-09-13T00:11:00.008334000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:11:00.044186 containerd[1720]: time="2025-09-13T00:11:00.043825600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.045876 containerd[1720]: time="2025-09-13T00:11:00.045836600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:00.045876 containerd[1720]: time="2025-09-13T00:11:00.045869500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:11:00.046019 containerd[1720]: time="2025-09-13T00:11:00.045889000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:11:00.046133 containerd[1720]: time="2025-09-13T00:11:00.046089900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:11:00.046133 containerd[1720]: time="2025-09-13T00:11:00.046117600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.046214 containerd[1720]: time="2025-09-13T00:11:00.046196700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:00.046260 containerd[1720]: time="2025-09-13T00:11:00.046214400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046427100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046452800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046511100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046528400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046631100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.046879500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.047042500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:11:00.047118 containerd[1720]: time="2025-09-13T00:11:00.047062700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:11:00.047433 containerd[1720]: time="2025-09-13T00:11:00.047190800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:11:00.047433 containerd[1720]: time="2025-09-13T00:11:00.047249500Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:11:00.065207 containerd[1720]: time="2025-09-13T00:11:00.065160000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:11:00.065338 containerd[1720]: time="2025-09-13T00:11:00.065278400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:11:00.065405 containerd[1720]: time="2025-09-13T00:11:00.065344800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:11:00.065405 containerd[1720]: time="2025-09-13T00:11:00.065383400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:11:00.065482 containerd[1720]: time="2025-09-13T00:11:00.065405300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:11:00.065640 containerd[1720]: time="2025-09-13T00:11:00.065596300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067177300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067352100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067379100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067402300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067427700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067448400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067518500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067547000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067596100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067620300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067644900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067678800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067713000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068304 containerd[1720]: time="2025-09-13T00:11:00.067748400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067768700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067793000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067827000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067852000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067874400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067908000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067929000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067955900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.067988900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068012300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068061900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068093000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068139900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068164400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.068889 containerd[1720]: time="2025-09-13T00:11:00.068317000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068531500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068572100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068595300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068715300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068737000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.068818500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.069134100Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:11:00.069368 containerd[1720]: time="2025-09-13T00:11:00.069181600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:11:00.069708 containerd[1720]: time="2025-09-13T00:11:00.069625600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:11:00.069896 containerd[1720]: time="2025-09-13T00:11:00.069728400Z" level=info msg="Connect containerd service" Sep 13 00:11:00.069896 containerd[1720]: time="2025-09-13T00:11:00.069777700Z" level=info msg="using legacy CRI server" Sep 13 00:11:00.069896 containerd[1720]: time="2025-09-13T00:11:00.069805900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:11:00.070145 containerd[1720]: time="2025-09-13T00:11:00.070120000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:11:00.071192 containerd[1720]: time="2025-09-13T00:11:00.071148000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:11:00.071326 
containerd[1720]: time="2025-09-13T00:11:00.071293500Z" level=info msg="Start subscribing containerd event" Sep 13 00:11:00.071379 containerd[1720]: time="2025-09-13T00:11:00.071340500Z" level=info msg="Start recovering state" Sep 13 00:11:00.071457 containerd[1720]: time="2025-09-13T00:11:00.071438000Z" level=info msg="Start event monitor" Sep 13 00:11:00.071513 containerd[1720]: time="2025-09-13T00:11:00.071456900Z" level=info msg="Start snapshots syncer" Sep 13 00:11:00.071741 containerd[1720]: time="2025-09-13T00:11:00.071717200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:11:00.071818 containerd[1720]: time="2025-09-13T00:11:00.071780300Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:11:00.072767 containerd[1720]: time="2025-09-13T00:11:00.072492900Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:11:00.072767 containerd[1720]: time="2025-09-13T00:11:00.072516400Z" level=info msg="Start streaming server" Sep 13 00:11:00.072767 containerd[1720]: time="2025-09-13T00:11:00.072604000Z" level=info msg="containerd successfully booted in 0.065610s" Sep 13 00:11:00.072712 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:11:00.078000 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:11:00.083940 systemd[1]: Startup finished in 1.094s (kernel) + 11.922s (initrd) + 31.012s (userspace) = 44.029s. Sep 13 00:11:00.292053 kubelet[1841]: E0913 00:11:00.291912 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:00.294274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:00.294462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:00.295288 systemd[1]: kubelet.service: Consumed 1.021s CPU time. Sep 13 00:11:00.780314 login[1832]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 13 00:11:00.780677 login[1831]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:11:00.788964 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:11:00.803772 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:11:00.806868 systemd-logind[1699]: New session 1 of user core. Sep 13 00:11:00.828403 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:11:00.840802 systemd[1]: Starting user@500.service - User Manager for UID 500... 
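The kubelet failure above (run.go:72, repeated on every scheduled restart below) comes down to a missing /var/lib/kubelet/config.yaml: the unit starts before anything (typically kubeadm) has written the config, the process exits with ENOENT, and systemd queues the next attempt. A sketch of the failing precondition; the helper is ours, not kubelet code:

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_present() -> bool:
        # The E0913 ... run.go:72 lines are this check failing; once
        # kubeadm (or an equivalent provisioner) writes config.yaml,
        # the restart loop resolves on its own.
        return KUBELET_CONFIG.is_file()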
Sep 13 00:11:00.853898 (systemd)[1863]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:11:01.080369 waagent[1827]: 2025-09-13T00:11:01.080198Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 13 00:11:01.084092 waagent[1827]: 2025-09-13T00:11:01.084005Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 13 00:11:01.088489 waagent[1827]: 2025-09-13T00:11:01.087111Z INFO Daemon Daemon Python: 3.11.9 Sep 13 00:11:01.090221 waagent[1827]: 2025-09-13T00:11:01.090145Z INFO Daemon Daemon Run daemon Sep 13 00:11:01.093076 waagent[1827]: 2025-09-13T00:11:01.093003Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 13 00:11:01.098778 waagent[1827]: 2025-09-13T00:11:01.098697Z INFO Daemon Daemon Using waagent for provisioning Sep 13 00:11:01.102796 waagent[1827]: 2025-09-13T00:11:01.102591Z INFO Daemon Daemon Activate resource disk Sep 13 00:11:01.105787 waagent[1827]: 2025-09-13T00:11:01.105715Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 00:11:01.116223 waagent[1827]: 2025-09-13T00:11:01.116163Z INFO Daemon Daemon Found device: None Sep 13 00:11:01.118997 waagent[1827]: 2025-09-13T00:11:01.118939Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 00:11:01.124193 waagent[1827]: 2025-09-13T00:11:01.124135Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 00:11:01.132710 waagent[1827]: 2025-09-13T00:11:01.132648Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:11:01.135940 waagent[1827]: 2025-09-13T00:11:01.135884Z INFO Daemon Daemon Running default provisioning handler Sep 13 00:11:01.150798 waagent[1827]: 2025-09-13T00:11:01.150732Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 13 00:11:01.160266 waagent[1827]: 2025-09-13T00:11:01.158430Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:11:01.163742 waagent[1827]: 2025-09-13T00:11:01.163684Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:11:01.166576 waagent[1827]: 2025-09-13T00:11:01.166521Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 00:11:01.187745 systemd[1863]: Queued start job for default target default.target. Sep 13 00:11:01.196546 systemd[1863]: Created slice app.slice - User Application Slice. Sep 13 00:11:01.196596 systemd[1863]: Reached target paths.target - Paths. Sep 13 00:11:01.196614 systemd[1863]: Reached target timers.target - Timers. Sep 13 00:11:01.198645 systemd[1863]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:11:01.217781 systemd[1863]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:11:01.218713 systemd[1863]: Reached target sockets.target - Sockets. Sep 13 00:11:01.218745 systemd[1863]: Reached target basic.target - Basic System. Sep 13 00:11:01.218795 systemd[1863]: Reached target default.target - Main User Target. Sep 13 00:11:01.218838 systemd[1863]: Startup finished in 358ms. Sep 13 00:11:01.219025 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:11:01.229698 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 13 00:11:01.269803 waagent[1827]: 2025-09-13T00:11:01.269698Z INFO Daemon Daemon Successfully mounted dvd Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.298404Z INFO Daemon Daemon Detect protocol endpoint Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.300527Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.301050Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.301546Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.302316Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 00:11:01.304909 waagent[1827]: 2025-09-13T00:11:01.303227Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 00:11:01.317775 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 00:11:01.327725 waagent[1827]: 2025-09-13T00:11:01.327670Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 00:11:01.336865 waagent[1827]: 2025-09-13T00:11:01.329216Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 00:11:01.336865 waagent[1827]: 2025-09-13T00:11:01.330017Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 00:11:01.437968 waagent[1827]: 2025-09-13T00:11:01.437866Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 00:11:01.441583 waagent[1827]: 2025-09-13T00:11:01.441517Z INFO Daemon Daemon Forcing an update of the goal state. Sep 13 00:11:01.448894 waagent[1827]: 2025-09-13T00:11:01.448831Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:11:01.497356 waagent[1827]: 2025-09-13T00:11:01.497280Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 13 00:11:01.514677 waagent[1827]: 2025-09-13T00:11:01.499098Z INFO Daemon Sep 13 00:11:01.514677 waagent[1827]: 2025-09-13T00:11:01.501074Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e217684e-fd4c-4ea9-bc6c-e7ab4158ad9a eTag: 5580550218697068050 source: Fabric] Sep 13 00:11:01.514677 waagent[1827]: 2025-09-13T00:11:01.502639Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 13 00:11:01.514677 waagent[1827]: 2025-09-13T00:11:01.503371Z INFO Daemon Sep 13 00:11:01.514677 waagent[1827]: 2025-09-13T00:11:01.504358Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:11:01.517557 waagent[1827]: 2025-09-13T00:11:01.517509Z INFO Daemon Daemon Downloading artifacts profile blob Sep 13 00:11:01.683544 waagent[1827]: 2025-09-13T00:11:01.683367Z INFO Daemon Downloaded certificate {'thumbprint': 'FDD0E3393453210EF357D28CDE79E7E2E9F6D620', 'hasPrivateKey': True} Sep 13 00:11:01.689115 waagent[1827]: 2025-09-13T00:11:01.689052Z INFO Daemon Fetch goal state completed Sep 13 00:11:01.698162 waagent[1827]: 2025-09-13T00:11:01.698117Z INFO Daemon Daemon Starting provisioning Sep 13 00:11:01.701030 waagent[1827]: 2025-09-13T00:11:01.700974Z INFO Daemon Daemon Handle ovf-env.xml. 
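The certificate line above reports a 40-hex-digit thumbprint ('FDD0E339...D620'). Azure-style thumbprints are conventionally the uppercase SHA-1 digest of the DER-encoded certificate; a sketch under that assumption:

    import hashlib

    def cert_thumbprint(der_bytes: bytes) -> str:
        # Uppercase hex SHA-1 over the DER bytes yields a 40-character
        # value like the one waagent logs, assuming the conventional
        # Azure thumbprint format.
        return hashlib.sha1(der_bytes).hexdigest().upper()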
Sep 13 00:11:01.706285 waagent[1827]: 2025-09-13T00:11:01.702097Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-78cb87e672] Sep 13 00:11:01.726571 waagent[1827]: 2025-09-13T00:11:01.726503Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-78cb87e672] Sep 13 00:11:01.735561 waagent[1827]: 2025-09-13T00:11:01.728055Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 00:11:01.735561 waagent[1827]: 2025-09-13T00:11:01.729006Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 00:11:01.751898 systemd-networkd[1578]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:11:01.751908 systemd-networkd[1578]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:11:01.751953 systemd-networkd[1578]: eth0: DHCP lease lost Sep 13 00:11:01.753201 waagent[1827]: 2025-09-13T00:11:01.753093Z INFO Daemon Daemon Create user account if not exists Sep 13 00:11:01.756395 systemd-networkd[1578]: eth0: DHCPv6 lease lost Sep 13 00:11:01.757578 waagent[1827]: 2025-09-13T00:11:01.756406Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 00:11:01.757578 waagent[1827]: 2025-09-13T00:11:01.757108Z INFO Daemon Daemon Configure sudoer Sep 13 00:11:01.757578 waagent[1827]: 2025-09-13T00:11:01.757497Z INFO Daemon Daemon Configure sshd Sep 13 00:11:01.757902 waagent[1827]: 2025-09-13T00:11:01.757851Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 13 00:11:01.758025 waagent[1827]: 2025-09-13T00:11:01.757985Z INFO Daemon Daemon Deploy ssh public key. Sep 13 00:11:01.781988 login[1832]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:11:01.786493 systemd-logind[1699]: New session 2 of user core. Sep 13 00:11:01.793642 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:11:01.813526 systemd-networkd[1578]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 13 00:11:10.521689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:11:10.528074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:11.313967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:11.318810 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:11.356784 kubelet[1919]: E0913 00:11:11.356685 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:11.360344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:11.360573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:21.521694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:11:21.526694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:21.627891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:11:21.632307 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:21.699156 chronyd[1714]: Selected source PHC0 Sep 13 00:11:22.315262 kubelet[1935]: E0913 00:11:22.276709 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:22.278876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:22.279036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:26.106648 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 13 00:11:31.856180 waagent[1827]: 2025-09-13T00:11:31.856112Z INFO Daemon Daemon Provisioning complete Sep 13 00:11:31.869400 waagent[1827]: 2025-09-13T00:11:31.869318Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 00:11:31.877906 waagent[1827]: 2025-09-13T00:11:31.870761Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 13 00:11:31.877906 waagent[1827]: 2025-09-13T00:11:31.871433Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 13 00:11:31.996252 waagent[1942]: 2025-09-13T00:11:31.996149Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 13 00:11:31.996706 waagent[1942]: 2025-09-13T00:11:31.996317Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 13 00:11:31.996706 waagent[1942]: 2025-09-13T00:11:31.996399Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 13 00:11:32.049614 waagent[1942]: 2025-09-13T00:11:32.049516Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 00:11:32.049860 waagent[1942]: 2025-09-13T00:11:32.049808Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:11:32.049960 waagent[1942]: 2025-09-13T00:11:32.049916Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:11:32.057894 waagent[1942]: 2025-09-13T00:11:32.057824Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:11:32.074447 waagent[1942]: 2025-09-13T00:11:32.074382Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 00:11:32.075021 waagent[1942]: 2025-09-13T00:11:32.074959Z INFO ExtHandler Sep 13 00:11:32.075117 waagent[1942]: 2025-09-13T00:11:32.075059Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ceafc10-fcca-4dbf-ba76-8c28f21db873 eTag: 5580550218697068050 source: Fabric] Sep 13 00:11:32.075423 waagent[1942]: 2025-09-13T00:11:32.075371Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
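Earlier in this stretch chronyd logs "Selected source PHC0", meaning it is steering the system clock from a PTP hardware clock rather than an NTP server; on this VM that is presumably the Hyper-V device behind /dev/ptp_hyperv seen earlier in the boot. A sketch listing PTP clocks via the standard sysfs class; the clock names present on this VM are an assumption:

    from pathlib import Path

    # Each /dev/ptpN has a matching /sys/class/ptp/ptpN entry whose
    # clock_name attribute identifies the provider (e.g. "hyperv").
    for dev in sorted(Path("/sys/class/ptp").glob("ptp*")):
        name = (dev / "clock_name").read_text().strip()
        print(dev.name, name)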
Sep 13 00:11:32.076077 waagent[1942]: 2025-09-13T00:11:32.076019Z INFO ExtHandler Sep 13 00:11:32.076150 waagent[1942]: 2025-09-13T00:11:32.076107Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:11:32.079914 waagent[1942]: 2025-09-13T00:11:32.079870Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 00:11:32.149230 waagent[1942]: 2025-09-13T00:11:32.149088Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FDD0E3393453210EF357D28CDE79E7E2E9F6D620', 'hasPrivateKey': True} Sep 13 00:11:32.149751 waagent[1942]: 2025-09-13T00:11:32.149689Z INFO ExtHandler Fetch goal state completed Sep 13 00:11:32.164707 waagent[1942]: 2025-09-13T00:11:32.164641Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1942 Sep 13 00:11:32.164871 waagent[1942]: 2025-09-13T00:11:32.164820Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 13 00:11:32.166421 waagent[1942]: 2025-09-13T00:11:32.166361Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 00:11:32.166798 waagent[1942]: 2025-09-13T00:11:32.166747Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 00:11:32.210931 waagent[1942]: 2025-09-13T00:11:32.210881Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 00:11:32.211159 waagent[1942]: 2025-09-13T00:11:32.211112Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 00:11:32.217719 waagent[1942]: 2025-09-13T00:11:32.217674Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 13 00:11:32.224404 systemd[1]: Reloading requested from client PID 1955 ('systemctl') (unit waagent.service)... Sep 13 00:11:32.224422 systemd[1]: Reloading... Sep 13 00:11:32.318502 zram_generator::config[1992]: No configuration found. Sep 13 00:11:32.434821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:32.516459 systemd[1]: Reloading finished in 291 ms. Sep 13 00:11:32.543807 waagent[1942]: 2025-09-13T00:11:32.543702Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 13 00:11:32.544891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:11:32.553734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:32.558217 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit waagent.service)... Sep 13 00:11:32.558241 systemd[1]: Reloading... Sep 13 00:11:32.709549 zram_generator::config[2083]: No configuration found. Sep 13 00:11:32.825521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:11:32.907430 systemd[1]: Reloading finished in 348 ms. 
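The docker.socket complaint that systemd prints on each reload above is only a warning; systemd rewrites /var/run/docker.sock to /run/docker.sock by itself. Silencing it would take a drop-in that resets the listen path (a sketch, not part of the stock image):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # An empty assignment clears the inherited legacy path before re-adding it.
    ListenStream=
    ListenStream=/run/docker.sock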
Sep 13 00:11:32.938558 waagent[1942]: 2025-09-13T00:11:32.936075Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 13 00:11:32.938558 waagent[1942]: 2025-09-13T00:11:32.936312Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 13 00:11:32.969961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:32.981786 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:33.400979 kubelet[2147]: E0913 00:11:33.400882 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:33.403343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:33.403553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:33.782584 waagent[1942]: 2025-09-13T00:11:33.782495Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 13 00:11:33.783245 waagent[1942]: 2025-09-13T00:11:33.783183Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 13 00:11:33.784054 waagent[1942]: 2025-09-13T00:11:33.783995Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 00:11:33.784183 waagent[1942]: 2025-09-13T00:11:33.784137Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:11:33.784370 waagent[1942]: 2025-09-13T00:11:33.784321Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:11:33.784818 waagent[1942]: 2025-09-13T00:11:33.784770Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 00:11:33.784951 waagent[1942]: 2025-09-13T00:11:33.784907Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:11:33.785269 waagent[1942]: 2025-09-13T00:11:33.785205Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 13 00:11:33.785826 waagent[1942]: 2025-09-13T00:11:33.785739Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:11:33.785945 waagent[1942]: 2025-09-13T00:11:33.785890Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 00:11:33.785945 waagent[1942]: 2025-09-13T00:11:33.785963Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 00:11:33.785945 waagent[1942]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 00:11:33.785945 waagent[1942]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 00:11:33.785945 waagent[1942]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 00:11:33.785945 waagent[1942]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:11:33.785945 waagent[1942]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:11:33.785945 waagent[1942]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:11:33.786321 waagent[1942]: 2025-09-13T00:11:33.786044Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
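The routing table MonitorHandler dumps above is raw /proc/net/route, where each address is a 32-bit value hex-encoded in host (little-endian) byte order. Decoding the destinations and gateways seen in the log (runnable Python; the comments map them back to earlier log lines):

    import socket, struct

    def route_hex_to_ip(field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex.
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    for field in ("0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
        print(field, "->", route_hex_to_ip(field))
    # 0108C80A -> 10.200.8.1      (the DHCP-provided default gateway)
    # 0008C80A -> 10.200.8.0      (the local /24)
    # 10813FA8 -> 168.63.129.16   (host route to the Azure WireServer)
    # FEA9FEA9 -> 169.254.169.254 (host route to IMDS)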
Sep 13 00:11:33.786504 waagent[1942]: 2025-09-13T00:11:33.786372Z INFO EnvHandler ExtHandler Configure routes Sep 13 00:11:33.786780 waagent[1942]: 2025-09-13T00:11:33.786724Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 00:11:33.786848 waagent[1942]: 2025-09-13T00:11:33.786785Z INFO EnvHandler ExtHandler Gateway:None Sep 13 00:11:33.786918 waagent[1942]: 2025-09-13T00:11:33.786853Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 00:11:33.787516 waagent[1942]: 2025-09-13T00:11:33.787445Z INFO EnvHandler ExtHandler Routes:None Sep 13 00:11:33.788275 waagent[1942]: 2025-09-13T00:11:33.788226Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 00:11:33.793374 waagent[1942]: 2025-09-13T00:11:33.793316Z INFO ExtHandler ExtHandler Sep 13 00:11:33.794690 waagent[1942]: 2025-09-13T00:11:33.794650Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 84224500-bc8c-424f-b65e-73a2c661a4f6 correlation ceec8226-0a3b-42f5-9599-0ca3970b7f21 created: 2025-09-13T00:09:37.069901Z] Sep 13 00:11:33.795046 waagent[1942]: 2025-09-13T00:11:33.794999Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 00:11:33.795566 waagent[1942]: 2025-09-13T00:11:33.795521Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Sep 13 00:11:33.851559 waagent[1942]: 2025-09-13T00:11:33.851375Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 45AEC3CC-8DA2-4A66-964A-61980B6CB2E6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 13 00:11:33.934932 waagent[1942]: 2025-09-13T00:11:33.934325Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 00:11:33.934932 waagent[1942]: Executing ['ip', '-a', '-o', 'link']: Sep 13 00:11:33.934932 waagent[1942]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 00:11:33.934932 waagent[1942]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:ab:3c brd ff:ff:ff:ff:ff:ff Sep 13 00:11:33.934932 waagent[1942]: 3: enP55828s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:df:ab:3c brd ff:ff:ff:ff:ff:ff\ altname enP55828p0s2 Sep 13 00:11:33.934932 waagent[1942]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 00:11:33.934932 waagent[1942]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 00:11:33.934932 waagent[1942]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 00:11:33.934932 waagent[1942]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 00:11:33.934932 waagent[1942]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 13 00:11:33.934932 waagent[1942]: 2: eth0 inet6 fe80::6245:bdff:fedf:ab3c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 13 00:11:33.979575 waagent[1942]: 2025-09-13T00:11:33.979506Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 13 00:11:33.979575 waagent[1942]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.979575 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.979575 waagent[1942]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.979575 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.979575 waagent[1942]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.979575 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.979575 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:11:33.979575 waagent[1942]: 7 569 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:11:33.979575 waagent[1942]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:11:33.983120 waagent[1942]: 2025-09-13T00:11:33.983060Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 13 00:11:33.983120 waagent[1942]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.983120 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.983120 waagent[1942]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.983120 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.983120 waagent[1942]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:11:33.983120 waagent[1942]: pkts bytes target prot opt in out source destination Sep 13 00:11:33.983120 waagent[1942]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:11:33.983120 waagent[1942]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:11:33.983120 waagent[1942]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:11:33.983924 waagent[1942]: 2025-09-13T00:11:33.983874Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 00:11:43.081907 update_engine[1702]: I20250913 00:11:43.081815 1702 update_attempter.cc:509] Updating boot flags... Sep 13 00:11:43.143515 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2198) Sep 13 00:11:43.252696 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2191) Sep 13 00:11:43.521745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:11:43.526711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:43.632850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:43.645788 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:43.680592 kubelet[2260]: E0913 00:11:43.680511 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:43.682799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:43.683010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:53.771814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
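The two firewall dumps above differ only in the OUTPUT counters (7 packets/569 bytes vs 10/1102); the rules themselves are identical: permit DNS and root-owned TCP to the WireServer, drop any other new connection to it. A hedged reconstruction of equivalent iptables calls, driven from Python so it is self-contained (waagent installs these itself, and the table/chain it targets can differ by agent version; must run as root):

    import subprocess

    WIRESERVER = "168.63.129.16"  # endpoint taken from the log above
    rules = [
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER] + rule,
                       check=True)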
Sep 13 00:11:53.779691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:11:53.915700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:11:53.920121 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:11:54.083900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:11:54.089774 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:43036.service - OpenSSH per-connection server daemon (10.200.16.10:43036). Sep 13 00:11:54.571781 kubelet[2275]: E0913 00:11:54.571723 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:11:54.574183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:11:54.574382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:11:55.248245 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 43036 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:11:55.249765 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:55.254099 systemd-logind[1699]: New session 3 of user core. Sep 13 00:11:55.263649 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:11:55.797278 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:43038.service - OpenSSH per-connection server daemon (10.200.16.10:43038). Sep 13 00:11:56.420733 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 43038 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:11:56.422201 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:56.427436 systemd-logind[1699]: New session 4 of user core. Sep 13 00:11:56.433645 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:11:56.863380 sshd[2288]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:56.866481 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:43038.service: Deactivated successfully. Sep 13 00:11:56.868331 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:11:56.869797 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:11:56.870884 systemd-logind[1699]: Removed session 4. Sep 13 00:11:56.976659 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:43054.service - OpenSSH per-connection server daemon (10.200.16.10:43054). Sep 13 00:11:57.596294 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 43054 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:11:57.597812 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:57.602900 systemd-logind[1699]: New session 5 of user core. Sep 13 00:11:57.611663 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:11:58.044429 sshd[2295]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:58.047611 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:43054.service: Deactivated successfully. Sep 13 00:11:58.049445 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:11:58.050883 systemd-logind[1699]: Session 5 logged out. 
Waiting for processes to exit. Sep 13 00:11:58.051921 systemd-logind[1699]: Removed session 5. Sep 13 00:11:58.155501 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:43056.service - OpenSSH per-connection server daemon (10.200.16.10:43056). Sep 13 00:11:58.776640 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 43056 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:11:58.778097 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:58.783072 systemd-logind[1699]: New session 6 of user core. Sep 13 00:11:58.788643 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:11:59.221647 sshd[2302]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:59.224947 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:43056.service: Deactivated successfully. Sep 13 00:11:59.226843 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:11:59.228249 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:11:59.229247 systemd-logind[1699]: Removed session 6. Sep 13 00:11:59.335636 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:43062.service - OpenSSH per-connection server daemon (10.200.16.10:43062). Sep 13 00:11:59.959647 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 43062 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:11:59.961116 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:59.966031 systemd-logind[1699]: New session 7 of user core. Sep 13 00:11:59.970645 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:12:00.479989 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:12:00.480368 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:12:00.506837 sudo[2312]: pam_unix(sudo:session): session closed for user root Sep 13 00:12:00.607670 sshd[2309]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:00.611967 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:43062.service: Deactivated successfully. Sep 13 00:12:00.613757 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:12:00.614536 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:12:00.615605 systemd-logind[1699]: Removed session 7. Sep 13 00:12:00.717802 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:41058.service - OpenSSH per-connection server daemon (10.200.16.10:41058). Sep 13 00:12:01.338946 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 41058 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:12:01.340452 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:01.345395 systemd-logind[1699]: New session 8 of user core. Sep 13 00:12:01.350633 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:12:01.682029 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:12:01.682390 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:12:01.685945 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 13 00:12:01.691033 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:12:01.691385 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:12:01.709836 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:12:01.711598 auditctl[2324]: No rules Sep 13 00:12:01.711986 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:12:01.712216 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:12:01.714906 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:12:01.741413 augenrules[2342]: No rules Sep 13 00:12:01.742960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:12:01.744429 sudo[2320]: pam_unix(sudo:session): session closed for user root Sep 13 00:12:01.847079 sshd[2317]: pam_unix(sshd:session): session closed for user core Sep 13 00:12:01.850012 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:41058.service: Deactivated successfully. Sep 13 00:12:01.851930 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:12:01.853435 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:12:01.854411 systemd-logind[1699]: Removed session 8. Sep 13 00:12:01.960638 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:41066.service - OpenSSH per-connection server daemon (10.200.16.10:41066). Sep 13 00:12:02.581067 sshd[2350]: Accepted publickey for core from 10.200.16.10 port 41066 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:12:02.582563 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:12:02.586523 systemd-logind[1699]: New session 9 of user core. Sep 13 00:12:02.594605 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:12:02.923851 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:12:02.924211 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:12:04.771579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 13 00:12:04.776708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:05.642664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:05.646740 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:12:05.685394 kubelet[2371]: E0913 00:12:05.685334 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:12:05.688637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:12:05.688859 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
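Every kubelet failure in this log, restart counters 1 through 8, is the same pre-bootstrap condition: /var/lib/kubelet/config.yaml does not exist yet, so the process exits and systemd reschedules it roughly ten seconds later. Given the observed spacing, the unit presumably carries restart settings along these lines (a sketch, not the image's actual unit file); the loop only ends once the bootstrap kicked off by install.sh above writes the config file:

    [Service]
    Restart=on-failure
    RestartSec=10s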
Sep 13 00:12:08.453811 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:12:08.455034 (dockerd)[2384]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:12:13.093362 dockerd[2384]: time="2025-09-13T00:12:13.093302171Z" level=info msg="Starting up" Sep 13 00:12:15.632109 dockerd[2384]: time="2025-09-13T00:12:15.632060166Z" level=info msg="Loading containers: start." Sep 13 00:12:15.771499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Sep 13 00:12:15.776708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:16.023499 kernel: Initializing XFRM netlink socket Sep 13 00:12:16.531598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:16.536438 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:12:16.593360 systemd-networkd[1578]: docker0: Link UP Sep 13 00:12:16.598850 kubelet[2460]: E0913 00:12:16.598783 2460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:12:16.601062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:12:16.601274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:12:16.622801 dockerd[2384]: time="2025-09-13T00:12:16.622750773Z" level=info msg="Loading containers: done." Sep 13 00:12:16.691258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2794912917-merged.mount: Deactivated successfully. Sep 13 00:12:16.727439 dockerd[2384]: time="2025-09-13T00:12:16.727357721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:12:16.728014 dockerd[2384]: time="2025-09-13T00:12:16.727533224Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:12:16.728014 dockerd[2384]: time="2025-09-13T00:12:16.727679926Z" level=info msg="Daemon has completed initialization" Sep 13 00:12:16.790709 dockerd[2384]: time="2025-09-13T00:12:16.790119610Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:12:16.790537 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:12:17.886098 containerd[1720]: time="2025-09-13T00:12:17.886053320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:12:18.641031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984268019.mount: Deactivated successfully. 
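The overlay2 warning above means dockerd will compute image-build layer diffs the slow way: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, redirect entries could make a naively-read upperdir incorrect as a layer diff, so the native diff path is disabled. A quick check of the kernel option (sketch; assumes the kernel exposes its config via /proc/config.gz, which not every image enables):

    import gzip, platform

    OPT = "CONFIG_OVERLAY_FS_REDIRECT_DIR"
    try:
        with gzip.open("/proc/config.gz", "rt") as cfg:  # needs CONFIG_IKCONFIG_PROC
            print(next((l.strip() for l in cfg if l.startswith(OPT)), OPT + " not found"))
    except FileNotFoundError:
        print("no /proc/config.gz; check /boot/config-" + platform.release())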
Sep 13 00:12:20.241741 containerd[1720]: time="2025-09-13T00:12:20.241686949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:20.244803 containerd[1720]: time="2025-09-13T00:12:20.244616897Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117132" Sep 13 00:12:20.247636 containerd[1720]: time="2025-09-13T00:12:20.247549444Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:20.252302 containerd[1720]: time="2025-09-13T00:12:20.251986216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:20.253016 containerd[1720]: time="2025-09-13T00:12:20.252978932Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.366886511s" Sep 13 00:12:20.253097 containerd[1720]: time="2025-09-13T00:12:20.253025033Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:12:20.253661 containerd[1720]: time="2025-09-13T00:12:20.253633043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:12:21.723345 containerd[1720]: time="2025-09-13T00:12:21.723287194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.726390 containerd[1720]: time="2025-09-13T00:12:21.726322943Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716640" Sep 13 00:12:21.729454 containerd[1720]: time="2025-09-13T00:12:21.729399193Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.735244 containerd[1720]: time="2025-09-13T00:12:21.735185587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:21.736403 containerd[1720]: time="2025-09-13T00:12:21.736253704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.48258216s" Sep 13 00:12:21.736403 containerd[1720]: time="2025-09-13T00:12:21.736291105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 
00:12:21.737127 containerd[1720]: time="2025-09-13T00:12:21.737095818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:12:23.072998 containerd[1720]: time="2025-09-13T00:12:23.072942397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:23.077239 containerd[1720]: time="2025-09-13T00:12:23.077015163Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787706" Sep 13 00:12:23.080605 containerd[1720]: time="2025-09-13T00:12:23.080563321Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:23.086691 containerd[1720]: time="2025-09-13T00:12:23.085331598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:23.086691 containerd[1720]: time="2025-09-13T00:12:23.086289714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.349158796s" Sep 13 00:12:23.086691 containerd[1720]: time="2025-09-13T00:12:23.086327714Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:12:23.087320 containerd[1720]: time="2025-09-13T00:12:23.087291330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:12:24.447131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193311068.mount: Deactivated successfully. 
Sep 13 00:12:24.989399 containerd[1720]: time="2025-09-13T00:12:24.989340398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:24.992801 containerd[1720]: time="2025-09-13T00:12:24.992723353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410260" Sep 13 00:12:24.995857 containerd[1720]: time="2025-09-13T00:12:24.995800403Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:25.000955 containerd[1720]: time="2025-09-13T00:12:25.000911686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:25.001635 containerd[1720]: time="2025-09-13T00:12:25.001502495Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.914177765s" Sep 13 00:12:25.001635 containerd[1720]: time="2025-09-13T00:12:25.001546596Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:12:25.006711 containerd[1720]: time="2025-09-13T00:12:25.006381175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:12:25.634847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983100958.mount: Deactivated successfully. Sep 13 00:12:26.771557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Sep 13 00:12:26.778751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:26.951790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:26.954951 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:12:27.013092 kubelet[2668]: E0913 00:12:27.013039 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:12:27.016215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:12:27.016402 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:12:27.581736 containerd[1720]: time="2025-09-13T00:12:27.581674821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:27.584908 containerd[1720]: time="2025-09-13T00:12:27.584841082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 13 00:12:27.588951 containerd[1720]: time="2025-09-13T00:12:27.588855960Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:27.601135 containerd[1720]: time="2025-09-13T00:12:27.601059096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:27.602364 containerd[1720]: time="2025-09-13T00:12:27.602190518Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.595767843s" Sep 13 00:12:27.602364 containerd[1720]: time="2025-09-13T00:12:27.602235319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:12:27.603354 containerd[1720]: time="2025-09-13T00:12:27.602975333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:12:28.594494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929216367.mount: Deactivated successfully. 
Sep 13 00:12:28.615779 containerd[1720]: time="2025-09-13T00:12:28.615724929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:28.618184 containerd[1720]: time="2025-09-13T00:12:28.617978073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 13 00:12:28.620898 containerd[1720]: time="2025-09-13T00:12:28.620841028Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:28.624819 containerd[1720]: time="2025-09-13T00:12:28.624768604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:28.625517 containerd[1720]: time="2025-09-13T00:12:28.625478218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.022455384s" Sep 13 00:12:28.625603 containerd[1720]: time="2025-09-13T00:12:28.625517818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:12:28.626340 containerd[1720]: time="2025-09-13T00:12:28.626119330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:12:29.236142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624581925.mount: Deactivated successfully. Sep 13 00:12:31.564283 containerd[1720]: time="2025-09-13T00:12:31.564232881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:31.567077 containerd[1720]: time="2025-09-13T00:12:31.567015634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 13 00:12:31.570181 containerd[1720]: time="2025-09-13T00:12:31.570125395Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:31.574812 containerd[1720]: time="2025-09-13T00:12:31.574761984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:31.576215 containerd[1720]: time="2025-09-13T00:12:31.576176312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.950027981s" Sep 13 00:12:31.576215 containerd[1720]: time="2025-09-13T00:12:31.576211112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:12:35.073442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
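The containerd pull records above are internally consistent and give a rough feel for registry throughput on this VM; computing it from two of the logged size/duration pairs:

    # Numbers verbatim from the containerd log lines above.
    for name, size_bytes, seconds in [
        ("kube-apiserver:v1.31.13", 28_113_723, 2.366886511),
        ("etcd:3.5.15-0", 56_909_194, 2.950027981),
    ]:
        print(f"{name}: {size_bytes / seconds / 1e6:.1f} MB/s")
    # kube-apiserver:v1.31.13: 11.9 MB/s
    # etcd:3.5.15-0: 19.3 MB/s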
Sep 13 00:12:35.081803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:35.108969 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-9.scope)... Sep 13 00:12:35.108985 systemd[1]: Reloading... Sep 13 00:12:35.203705 zram_generator::config[2796]: No configuration found. Sep 13 00:12:35.330050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:12:35.417544 systemd[1]: Reloading finished in 308 ms. Sep 13 00:12:35.653589 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:12:35.653707 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:12:35.654008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:35.659926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:40.168710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:40.178875 (kubelet)[2867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:12:40.219990 kubelet[2867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:12:40.219990 kubelet[2867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:12:40.219990 kubelet[2867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
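The three deprecation notices above all point at the same mechanism: these flags now belong in the KubeletConfiguration file passed via --config, which is the very /var/lib/kubelet/config.yaml whose absence drove the earlier crash loop. A sketch of the shape such a file takes (real KubeletConfiguration v1beta1 field names; the values are illustrative, echoing settings visible later in this log, not a dump of this node's actual file):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock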
Sep 13 00:12:40.220506 kubelet[2867]: I0913 00:12:40.220064 2867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:12:40.531764 kubelet[2867]: I0913 00:12:40.531719 2867 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:12:40.531764 kubelet[2867]: I0913 00:12:40.531751 2867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:12:40.532078 kubelet[2867]: I0913 00:12:40.532055 2867 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:12:40.559680 kubelet[2867]: E0913 00:12:40.559628 2867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:40.560512 kubelet[2867]: I0913 00:12:40.560484 2867 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:12:40.568170 kubelet[2867]: E0913 00:12:40.568103 2867 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:12:40.568170 kubelet[2867]: I0913 00:12:40.568144 2867 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:12:40.572938 kubelet[2867]: I0913 00:12:40.572910 2867 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:12:40.573869 kubelet[2867]: I0913 00:12:40.573845 2867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:12:40.574076 kubelet[2867]: I0913 00:12:40.574027 2867 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:12:40.574257 kubelet[2867]: I0913 00:12:40.574073 2867 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-78cb87e672","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:12:40.574414 kubelet[2867]: I0913 00:12:40.574269 2867 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:12:40.574414 kubelet[2867]: I0913 00:12:40.574281 2867 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:12:40.574513 kubelet[2867]: I0913 00:12:40.574415 2867 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:12:40.577500 kubelet[2867]: I0913 00:12:40.577155 2867 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:12:40.577500 kubelet[2867]: I0913 00:12:40.577189 2867 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:12:40.577500 kubelet[2867]: I0913 00:12:40.577229 2867 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:12:40.577500 kubelet[2867]: I0913 00:12:40.577250 2867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:12:40.583940 kubelet[2867]: W0913 00:12:40.583666 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:40.583940 kubelet[2867]: E0913 00:12:40.583779 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:40.585041 kubelet[2867]: W0913 00:12:40.584989 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:40.585143 kubelet[2867]: E0913 00:12:40.585042 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:40.585143 kubelet[2867]: I0913 00:12:40.585132 2867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:12:40.585557 kubelet[2867]: I0913 00:12:40.585532 2867 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:12:40.586494 kubelet[2867]: W0913 00:12:40.586321 2867 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:12:40.589338 kubelet[2867]: I0913 00:12:40.589316 2867 server.go:1274] "Started kubelet" Sep 13 00:12:40.594961 kubelet[2867]: I0913 00:12:40.594931 2867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:12:40.599012 kubelet[2867]: I0913 00:12:40.598974 2867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:12:40.600402 kubelet[2867]: I0913 00:12:40.600374 2867 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:12:40.605599 kubelet[2867]: I0913 00:12:40.605559 2867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:12:40.605871 kubelet[2867]: I0913 00:12:40.605812 2867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:12:40.607913 kubelet[2867]: I0913 00:12:40.607810 2867 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:12:40.609008 kubelet[2867]: I0913 00:12:40.608990 2867 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:12:40.609130 kubelet[2867]: E0913 00:12:40.609109 2867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:40.610590 kubelet[2867]: E0913 00:12:40.609994 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Sep 13 00:12:40.610590 kubelet[2867]: I0913 00:12:40.610290 2867 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:12:40.612088 kubelet[2867]: E0913 00:12:40.609372 2867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-78cb87e672.1864af2c94af6dd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-78cb87e672,UID:ci-4081.3.5-n-78cb87e672,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-78cb87e672,},FirstTimestamp:2025-09-13 00:12:40.589282771 +0000 UTC m=+0.406444371,LastTimestamp:2025-09-13 00:12:40.589282771 +0000 UTC m=+0.406444371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-78cb87e672,}" Sep 13 00:12:40.612088 kubelet[2867]: I0913 00:12:40.611444 2867 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:12:40.612834 kubelet[2867]: I0913 00:12:40.612813 2867 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:12:40.613055 kubelet[2867]: I0913 00:12:40.613024 2867 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:12:40.615046 kubelet[2867]: I0913 00:12:40.615026 2867 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:12:40.620157 kubelet[2867]: W0913 00:12:40.620101 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:40.620256 kubelet[2867]: E0913 00:12:40.620163 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:40.697528 kubelet[2867]: I0913 00:12:40.697482 2867 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:12:40.697528 kubelet[2867]: I0913 00:12:40.697511 2867 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:12:40.697528 kubelet[2867]: I0913 00:12:40.697537 2867 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:12:40.709526 kubelet[2867]: E0913 00:12:40.709458 2867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:40.810087 kubelet[2867]: E0913 00:12:40.809962 2867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:40.811520 kubelet[2867]: E0913 00:12:40.811375 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Sep 13 00:12:40.910998 kubelet[2867]: E0913 00:12:40.910932 2867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:40.966544 kubelet[2867]: I0913 00:12:40.966509 2867 policy_none.go:49] "None policy: Start" Sep 13 00:12:40.967426 kubelet[2867]: I0913 00:12:40.967399 2867 memory_manager.go:170] "Starting memorymanager" 
policy="None" Sep 13 00:12:40.967545 kubelet[2867]: I0913 00:12:40.967444 2867 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:12:41.012103 kubelet[2867]: E0913 00:12:41.012049 2867 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:41.025110 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:12:41.034657 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:12:41.040385 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:12:41.050450 kubelet[2867]: I0913 00:12:41.050342 2867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:12:41.050919 kubelet[2867]: I0913 00:12:41.050597 2867 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:12:41.050919 kubelet[2867]: I0913 00:12:41.050616 2867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:12:41.050919 kubelet[2867]: I0913 00:12:41.050898 2867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:12:41.053359 kubelet[2867]: E0913 00:12:41.053329 2867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:41.079564 kubelet[2867]: I0913 00:12:41.079282 2867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:12:41.082395 kubelet[2867]: I0913 00:12:41.082357 2867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:12:41.082395 kubelet[2867]: I0913 00:12:41.082393 2867 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:12:41.082682 kubelet[2867]: I0913 00:12:41.082416 2867 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:12:41.082682 kubelet[2867]: E0913 00:12:41.082539 2867 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:12:41.084550 kubelet[2867]: W0913 00:12:41.084516 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:41.084683 kubelet[2867]: E0913 00:12:41.084568 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:41.153957 kubelet[2867]: I0913 00:12:41.153918 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.154330 kubelet[2867]: E0913 00:12:41.154300 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.194407 systemd[1]: Created slice kubepods-burstable-podf526ac3f1b2f8c5eb9b2019a4fc89ff4.slice - libcontainer container kubepods-burstable-podf526ac3f1b2f8c5eb9b2019a4fc89ff4.slice. 
Sep 13 00:12:41.213045 kubelet[2867]: E0913 00:12:41.212997 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Sep 13 00:12:41.213186 kubelet[2867]: I0913 00:12:41.213110 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213186 kubelet[2867]: I0913 00:12:41.213144 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213289 kubelet[2867]: I0913 00:12:41.213210 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213289 kubelet[2867]: I0913 00:12:41.213235 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213289 kubelet[2867]: I0913 00:12:41.213259 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213289 kubelet[2867]: I0913 00:12:41.213284 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213453 kubelet[2867]: I0913 00:12:41.213306 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213453 kubelet[2867]: I0913 00:12:41.213341 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.213453 kubelet[2867]: I0913 00:12:41.213365 2867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fe7bea717221a2fdbab7faf5ff08276-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-78cb87e672\" (UID: \"3fe7bea717221a2fdbab7faf5ff08276\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.214950 systemd[1]: Created slice kubepods-burstable-pod00186464ea83fdda24286073f4a86de7.slice - libcontainer container kubepods-burstable-pod00186464ea83fdda24286073f4a86de7.slice. Sep 13 00:12:41.219267 systemd[1]: Created slice kubepods-burstable-pod3fe7bea717221a2fdbab7faf5ff08276.slice - libcontainer container kubepods-burstable-pod3fe7bea717221a2fdbab7faf5ff08276.slice. Sep 13 00:12:41.357368 kubelet[2867]: I0913 00:12:41.357016 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.357368 kubelet[2867]: E0913 00:12:41.357339 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.512113 containerd[1720]: time="2025-09-13T00:12:41.512064442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-78cb87e672,Uid:f526ac3f1b2f8c5eb9b2019a4fc89ff4,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:41.518261 containerd[1720]: time="2025-09-13T00:12:41.518219344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-78cb87e672,Uid:00186464ea83fdda24286073f4a86de7,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:41.522806 containerd[1720]: time="2025-09-13T00:12:41.522765620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-78cb87e672,Uid:3fe7bea717221a2fdbab7faf5ff08276,Namespace:kube-system,Attempt:0,}" Sep 13 00:12:41.697343 kubelet[2867]: W0913 00:12:41.697215 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:41.697343 kubelet[2867]: E0913 00:12:41.697272 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:41.760010 kubelet[2867]: I0913 00:12:41.759974 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.760336 kubelet[2867]: E0913 00:12:41.760302 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:41.955666 kubelet[2867]: W0913 00:12:41.955533 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:41.955666 kubelet[2867]: E0913 00:12:41.955585 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:42.013594 kubelet[2867]: E0913 00:12:42.013535 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Sep 13 00:12:42.013594 kubelet[2867]: W0913 00:12:42.013538 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:42.013777 kubelet[2867]: E0913 00:12:42.013608 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:42.047657 kubelet[2867]: W0913 00:12:42.047592 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:42.047808 kubelet[2867]: E0913 00:12:42.047664 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:42.562180 kubelet[2867]: I0913 00:12:42.562148 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:42.562637 kubelet[2867]: E0913 00:12:42.562585 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:42.757897 kubelet[2867]: E0913 00:12:42.757831 2867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:43.618670 kubelet[2867]: E0913 00:12:43.617390 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" 
interval="3.2s" Sep 13 00:12:44.017231 kubelet[2867]: W0913 00:12:44.017068 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:44.017231 kubelet[2867]: E0913 00:12:44.017148 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:44.164374 kubelet[2867]: I0913 00:12:44.164340 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:44.164754 kubelet[2867]: E0913 00:12:44.164717 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:44.272116 kubelet[2867]: W0913 00:12:44.272052 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:44.272255 kubelet[2867]: E0913 00:12:44.272129 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:44.632838 kubelet[2867]: W0913 00:12:44.632688 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:44.632838 kubelet[2867]: E0913 00:12:44.632763 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:45.131121 kubelet[2867]: W0913 00:12:45.131052 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:45.131283 kubelet[2867]: E0913 00:12:45.131128 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-78cb87e672&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:46.818267 kubelet[2867]: E0913 00:12:46.818209 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-78cb87e672?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="6.4s" Sep 13 00:12:46.957188 kubelet[2867]: E0913 00:12:46.957143 2867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:47.366533 kubelet[2867]: I0913 00:12:47.366503 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:47.366922 kubelet[2867]: E0913 00:12:47.366881 2867 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:47.964611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992731852.mount: Deactivated successfully. Sep 13 00:12:48.003353 containerd[1720]: time="2025-09-13T00:12:48.003292662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:12:48.014656 containerd[1720]: time="2025-09-13T00:12:48.014507160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 13 00:12:48.017393 containerd[1720]: time="2025-09-13T00:12:48.017346810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:12:48.020655 containerd[1720]: time="2025-09-13T00:12:48.020615667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:12:48.022960 containerd[1720]: time="2025-09-13T00:12:48.022919208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:12:48.027530 containerd[1720]: time="2025-09-13T00:12:48.027490989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:12:48.030831 containerd[1720]: time="2025-09-13T00:12:48.030780046Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:12:48.045922 containerd[1720]: time="2025-09-13T00:12:48.045843412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:12:48.047213 containerd[1720]: time="2025-09-13T00:12:48.046640026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 6.52833838s" Sep 13 00:12:48.048185 containerd[1720]: time="2025-09-13T00:12:48.048148452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.535998209s" Sep 13 00:12:48.053925 containerd[1720]: time="2025-09-13T00:12:48.053866453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 6.531026632s" Sep 13 00:12:48.513801 kubelet[2867]: W0913 00:12:48.513755 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:48.513801 kubelet[2867]: E0913 00:12:48.513806 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:48.801349 containerd[1720]: time="2025-09-13T00:12:48.800674906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:48.801349 containerd[1720]: time="2025-09-13T00:12:48.801125914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:48.801349 containerd[1720]: time="2025-09-13T00:12:48.801151414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.801349 containerd[1720]: time="2025-09-13T00:12:48.801255116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.803402 containerd[1720]: time="2025-09-13T00:12:48.803316352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:48.803614 containerd[1720]: time="2025-09-13T00:12:48.803453855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:48.803768 containerd[1720]: time="2025-09-13T00:12:48.803724459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.806110 containerd[1720]: time="2025-09-13T00:12:48.805810896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.808388 containerd[1720]: time="2025-09-13T00:12:48.808271940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:48.809436 containerd[1720]: time="2025-09-13T00:12:48.808456143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:48.809436 containerd[1720]: time="2025-09-13T00:12:48.809213556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.809436 containerd[1720]: time="2025-09-13T00:12:48.809333558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:48.818416 kubelet[2867]: E0913 00:12:48.818275 2867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-78cb87e672.1864af2c94af6dd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-78cb87e672,UID:ci-4081.3.5-n-78cb87e672,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-78cb87e672,},FirstTimestamp:2025-09-13 00:12:40.589282771 +0000 UTC m=+0.406444371,LastTimestamp:2025-09-13 00:12:40.589282771 +0000 UTC m=+0.406444371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-78cb87e672,}" Sep 13 00:12:48.860715 systemd[1]: Started cri-containerd-52a403320138e91fd0aece4998994635716e4c069d4b6c63d3a54e1c8f6c541a.scope - libcontainer container 52a403320138e91fd0aece4998994635716e4c069d4b6c63d3a54e1c8f6c541a. Sep 13 00:12:48.867340 systemd[1]: Started cri-containerd-0366175912b51d6f6d07dfa4b82ecb555e054770f3aa0d7a06fb0297086ed240.scope - libcontainer container 0366175912b51d6f6d07dfa4b82ecb555e054770f3aa0d7a06fb0297086ed240. Sep 13 00:12:48.869805 systemd[1]: Started cri-containerd-4fad557b39bc5491fc6f409816a30fb0bd417e702cd87d83975fd142f2ac5cd5.scope - libcontainer container 4fad557b39bc5491fc6f409816a30fb0bd417e702cd87d83975fd142f2ac5cd5. 
Sep 13 00:12:48.953819 containerd[1720]: time="2025-09-13T00:12:48.953763202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-78cb87e672,Uid:00186464ea83fdda24286073f4a86de7,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a403320138e91fd0aece4998994635716e4c069d4b6c63d3a54e1c8f6c541a\"" Sep 13 00:12:48.965767 containerd[1720]: time="2025-09-13T00:12:48.965401907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-78cb87e672,Uid:3fe7bea717221a2fdbab7faf5ff08276,Namespace:kube-system,Attempt:0,} returns sandbox id \"0366175912b51d6f6d07dfa4b82ecb555e054770f3aa0d7a06fb0297086ed240\"" Sep 13 00:12:48.967761 containerd[1720]: time="2025-09-13T00:12:48.967286640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-78cb87e672,Uid:f526ac3f1b2f8c5eb9b2019a4fc89ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fad557b39bc5491fc6f409816a30fb0bd417e702cd87d83975fd142f2ac5cd5\"" Sep 13 00:12:48.969702 containerd[1720]: time="2025-09-13T00:12:48.969638682Z" level=info msg="CreateContainer within sandbox \"52a403320138e91fd0aece4998994635716e4c069d4b6c63d3a54e1c8f6c541a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:12:48.970890 containerd[1720]: time="2025-09-13T00:12:48.970740701Z" level=info msg="CreateContainer within sandbox \"0366175912b51d6f6d07dfa4b82ecb555e054770f3aa0d7a06fb0297086ed240\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:12:48.971576 containerd[1720]: time="2025-09-13T00:12:48.971547815Z" level=info msg="CreateContainer within sandbox \"4fad557b39bc5491fc6f409816a30fb0bd417e702cd87d83975fd142f2ac5cd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:12:49.019028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988385824.mount: Deactivated successfully. Sep 13 00:12:49.023056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171492290.mount: Deactivated successfully. Sep 13 00:12:49.033833 containerd[1720]: time="2025-09-13T00:12:49.033783511Z" level=info msg="CreateContainer within sandbox \"52a403320138e91fd0aece4998994635716e4c069d4b6c63d3a54e1c8f6c541a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"987ccdd6639bd1f52e76cd829afb3b9ab8b69f9a6bc8d3e16f1606ac246deab6\"" Sep 13 00:12:49.034620 containerd[1720]: time="2025-09-13T00:12:49.034591725Z" level=info msg="StartContainer for \"987ccdd6639bd1f52e76cd829afb3b9ab8b69f9a6bc8d3e16f1606ac246deab6\"" Sep 13 00:12:49.065653 systemd[1]: Started cri-containerd-987ccdd6639bd1f52e76cd829afb3b9ab8b69f9a6bc8d3e16f1606ac246deab6.scope - libcontainer container 987ccdd6639bd1f52e76cd829afb3b9ab8b69f9a6bc8d3e16f1606ac246deab6. 
Sep 13 00:12:49.067935 containerd[1720]: time="2025-09-13T00:12:49.067154699Z" level=info msg="CreateContainer within sandbox \"4fad557b39bc5491fc6f409816a30fb0bd417e702cd87d83975fd142f2ac5cd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcd5478f08f070b4610ed381f6e69ef4a0d28ab6314cb40d4bfc3cc5e20f82ea\"" Sep 13 00:12:49.068578 containerd[1720]: time="2025-09-13T00:12:49.068543423Z" level=info msg="StartContainer for \"fcd5478f08f070b4610ed381f6e69ef4a0d28ab6314cb40d4bfc3cc5e20f82ea\"" Sep 13 00:12:49.072668 containerd[1720]: time="2025-09-13T00:12:49.072625495Z" level=info msg="CreateContainer within sandbox \"0366175912b51d6f6d07dfa4b82ecb555e054770f3aa0d7a06fb0297086ed240\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf2578a272fddf8af3de8df41a1bdf7621c25589578555880987989567af2b86\"" Sep 13 00:12:49.073508 containerd[1720]: time="2025-09-13T00:12:49.073387009Z" level=info msg="StartContainer for \"cf2578a272fddf8af3de8df41a1bdf7621c25589578555880987989567af2b86\"" Sep 13 00:12:49.140443 systemd[1]: Started cri-containerd-cf2578a272fddf8af3de8df41a1bdf7621c25589578555880987989567af2b86.scope - libcontainer container cf2578a272fddf8af3de8df41a1bdf7621c25589578555880987989567af2b86. Sep 13 00:12:49.143615 systemd[1]: Started cri-containerd-fcd5478f08f070b4610ed381f6e69ef4a0d28ab6314cb40d4bfc3cc5e20f82ea.scope - libcontainer container fcd5478f08f070b4610ed381f6e69ef4a0d28ab6314cb40d4bfc3cc5e20f82ea. Sep 13 00:12:49.153490 containerd[1720]: time="2025-09-13T00:12:49.153034311Z" level=info msg="StartContainer for \"987ccdd6639bd1f52e76cd829afb3b9ab8b69f9a6bc8d3e16f1606ac246deab6\" returns successfully" Sep 13 00:12:49.188103 kubelet[2867]: W0913 00:12:49.188060 2867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Sep 13 00:12:49.189575 kubelet[2867]: E0913 00:12:49.189518 2867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.12:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:12:49.225189 containerd[1720]: time="2025-09-13T00:12:49.225135781Z" level=info msg="StartContainer for \"fcd5478f08f070b4610ed381f6e69ef4a0d28ab6314cb40d4bfc3cc5e20f82ea\" returns successfully" Sep 13 00:12:49.265948 containerd[1720]: time="2025-09-13T00:12:49.265885399Z" level=info msg="StartContainer for \"cf2578a272fddf8af3de8df41a1bdf7621c25589578555880987989567af2b86\" returns successfully" Sep 13 00:12:51.053595 kubelet[2867]: E0913 00:12:51.053552 2867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-78cb87e672\" not found" Sep 13 00:12:51.590076 kubelet[2867]: I0913 00:12:51.590030 2867 apiserver.go:52] "Watching apiserver" Sep 13 00:12:51.711313 kubelet[2867]: I0913 00:12:51.711267 2867 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:12:52.037855 kubelet[2867]: E0913 00:12:52.037815 2867 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.5-n-78cb87e672" not found Sep 13 
00:12:52.388725 kubelet[2867]: E0913 00:12:52.388601 2867 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.5-n-78cb87e672" not found Sep 13 00:12:52.823794 kubelet[2867]: E0913 00:12:52.823747 2867 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.5-n-78cb87e672" not found Sep 13 00:12:53.223251 kubelet[2867]: E0913 00:12:53.223094 2867 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-78cb87e672\" not found" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:53.723226 kubelet[2867]: E0913 00:12:53.723185 2867 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.5-n-78cb87e672" not found Sep 13 00:12:53.769842 kubelet[2867]: I0913 00:12:53.769528 2867 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:53.790387 kubelet[2867]: I0913 00:12:53.790350 2867 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:54.339258 systemd[1]: Reloading requested from client PID 3145 ('systemctl') (unit session-9.scope)... Sep 13 00:12:54.339275 systemd[1]: Reloading... Sep 13 00:12:54.461569 zram_generator::config[3197]: No configuration found. Sep 13 00:12:54.630641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:12:54.727753 systemd[1]: Reloading finished in 387 ms. Sep 13 00:12:54.770209 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:54.787854 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:12:54.788137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:54.795039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:12:55.081297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:12:55.093881 (kubelet)[3252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:12:55.131232 kubelet[3252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:12:55.131232 kubelet[3252]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:12:55.131232 kubelet[3252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:12:55.131727 kubelet[3252]: I0913 00:12:55.131335 3252 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:12:55.136988 kubelet[3252]: I0913 00:12:55.136950 3252 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:12:55.136988 kubelet[3252]: I0913 00:12:55.136977 3252 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:12:55.137237 kubelet[3252]: I0913 00:12:55.137217 3252 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:12:55.139977 kubelet[3252]: I0913 00:12:55.139939 3252 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:12:55.143684 kubelet[3252]: I0913 00:12:55.143489 3252 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:12:55.146918 kubelet[3252]: E0913 00:12:55.146887 3252 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:12:55.146918 kubelet[3252]: I0913 00:12:55.146915 3252 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:12:55.151508 kubelet[3252]: I0913 00:12:55.150684 3252 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:12:55.151508 kubelet[3252]: I0913 00:12:55.150796 3252 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:12:55.151508 kubelet[3252]: I0913 00:12:55.150923 3252 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:12:55.151508 kubelet[3252]: I0913 00:12:55.150950 3252 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-n-78cb87e672","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151101 3252 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151109 3252 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151134 3252 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151221 3252 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151236 3252 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151260 3252 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:12:55.151804 kubelet[3252]: I0913 00:12:55.151269 3252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:12:55.156117 kubelet[3252]: I0913 00:12:55.156096 3252 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:12:55.158423 kubelet[3252]: I0913 00:12:55.158403 3252 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:12:55.159140 kubelet[3252]: I0913 00:12:55.159012 3252 server.go:1274] "Started kubelet" Sep 13 00:12:55.164805 kubelet[3252]: I0913 00:12:55.164785 3252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:12:55.176831 kubelet[3252]: I0913 00:12:55.176754 3252 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:12:55.184110 kubelet[3252]: I0913 00:12:55.184071 3252 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:12:55.184365 kubelet[3252]: I0913 00:12:55.184350 3252 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:12:55.185773 kubelet[3252]: I0913 00:12:55.185735 3252 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:12:55.186295 kubelet[3252]: I0913 00:12:55.186082 3252 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:12:55.187032 kubelet[3252]: I0913 00:12:55.187007 3252 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:12:55.189077 kubelet[3252]: I0913 00:12:55.189052 3252 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:12:55.189946 kubelet[3252]: I0913 00:12:55.189192 3252 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:12:55.190093 kubelet[3252]: I0913 00:12:55.190069 3252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:12:55.191774 kubelet[3252]: I0913 00:12:55.191753 3252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:12:55.191884 kubelet[3252]: I0913 00:12:55.191873 3252 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:12:55.191965 kubelet[3252]: I0913 00:12:55.191955 3252 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:12:55.192088 kubelet[3252]: E0913 00:12:55.192070 3252 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:12:55.195908 kubelet[3252]: I0913 00:12:55.195856 3252 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:12:55.196394 kubelet[3252]: I0913 00:12:55.196309 3252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:12:55.202010 kubelet[3252]: I0913 00:12:55.201986 3252 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:12:55.574753 kubelet[3252]: E0913 00:12:55.573896 3252 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:12:55.578012 kubelet[3252]: I0913 00:12:55.577984 3252 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:12:55.578012 kubelet[3252]: I0913 00:12:55.578003 3252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:12:55.578183 kubelet[3252]: I0913 00:12:55.578026 3252 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:12:55.578228 kubelet[3252]: I0913 00:12:55.578205 3252 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:12:55.578266 kubelet[3252]: I0913 00:12:55.578219 3252 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:12:55.578266 kubelet[3252]: I0913 00:12:55.578244 3252 policy_none.go:49] "None policy: Start" Sep 13 00:12:55.579864 kubelet[3252]: I0913 00:12:55.578892 3252 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:12:55.579864 kubelet[3252]: I0913 00:12:55.578917 3252 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:12:55.579864 kubelet[3252]: I0913 00:12:55.579092 3252 state_mem.go:75] "Updated machine memory state" Sep 13 00:12:55.585667 kubelet[3252]: I0913 00:12:55.585632 3252 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:12:55.586025 kubelet[3252]: I0913 00:12:55.585836 3252 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 
00:12:55.586025 kubelet[3252]: I0913 00:12:55.585854 3252 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:12:55.586623 kubelet[3252]: I0913 00:12:55.586584 3252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:12:55.699411 kubelet[3252]: I0913 00:12:55.699209 3252 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.714029 kubelet[3252]: I0913 00:12:55.713721 3252 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.714029 kubelet[3252]: I0913 00:12:55.713820 3252 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.788828 kubelet[3252]: W0913 00:12:55.788619 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:12:55.794134 kubelet[3252]: W0913 00:12:55.793902 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:12:55.794868 kubelet[3252]: W0913 00:12:55.794826 3252 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:12:55.873338 kubelet[3252]: I0913 00:12:55.872349 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fe7bea717221a2fdbab7faf5ff08276-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-78cb87e672\" (UID: \"3fe7bea717221a2fdbab7faf5ff08276\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873338 kubelet[3252]: I0913 00:12:55.872390 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873338 kubelet[3252]: I0913 00:12:55.872415 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873338 kubelet[3252]: I0913 00:12:55.872436 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873338 kubelet[3252]: I0913 00:12:55.872460 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873652 
kubelet[3252]: I0913 00:12:55.872516 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873652 kubelet[3252]: I0913 00:12:55.872552 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f526ac3f1b2f8c5eb9b2019a4fc89ff4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-78cb87e672\" (UID: \"f526ac3f1b2f8c5eb9b2019a4fc89ff4\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873652 kubelet[3252]: I0913 00:12:55.872574 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:55.873652 kubelet[3252]: I0913 00:12:55.872593 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00186464ea83fdda24286073f4a86de7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-78cb87e672\" (UID: \"00186464ea83fdda24286073f4a86de7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" Sep 13 00:12:56.153691 kubelet[3252]: I0913 00:12:56.152977 3252 apiserver.go:52] "Watching apiserver" Sep 13 00:12:56.189612 kubelet[3252]: I0913 00:12:56.189549 3252 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:12:56.269904 kubelet[3252]: I0913 00:12:56.269814 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-78cb87e672" podStartSLOduration=1.269790532 podStartE2EDuration="1.269790532s" podCreationTimestamp="2025-09-13 00:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:56.257866736 +0000 UTC m=+1.159898254" watchObservedRunningTime="2025-09-13 00:12:56.269790532 +0000 UTC m=+1.171821950" Sep 13 00:12:56.281486 kubelet[3252]: I0913 00:12:56.281266 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-78cb87e672" podStartSLOduration=1.28124792 podStartE2EDuration="1.28124792s" podCreationTimestamp="2025-09-13 00:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:56.270127437 +0000 UTC m=+1.172158955" watchObservedRunningTime="2025-09-13 00:12:56.28124792 +0000 UTC m=+1.183279338" Sep 13 00:12:56.293113 kubelet[3252]: I0913 00:12:56.293047 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-78cb87e672" podStartSLOduration=1.293030114 podStartE2EDuration="1.293030114s" podCreationTimestamp="2025-09-13 00:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:56.281446223 +0000 UTC m=+1.183477641" watchObservedRunningTime="2025-09-13 00:12:56.293030114 +0000 UTC m=+1.195061632" Sep 13 00:12:59.133812 kubelet[3252]: I0913 00:12:59.133766 3252 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:12:59.134314 containerd[1720]: time="2025-09-13T00:12:59.134167684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:12:59.134745 kubelet[3252]: I0913 00:12:59.134458 3252 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:13:00.025950 systemd[1]: Created slice kubepods-besteffort-poda01642a9_eb26_4631_a2d1_a3dfc0c34101.slice - libcontainer container kubepods-besteffort-poda01642a9_eb26_4631_a2d1_a3dfc0c34101.slice. Sep 13 00:13:00.176736 systemd[1]: Created slice kubepods-besteffort-pod5fa6eb7d_d032_45a9_a4c9_4b711af3c2a5.slice - libcontainer container kubepods-besteffort-pod5fa6eb7d_d032_45a9_a4c9_4b711af3c2a5.slice. Sep 13 00:13:00.199918 kubelet[3252]: I0913 00:13:00.199745 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a01642a9-eb26-4631-a2d1-a3dfc0c34101-kube-proxy\") pod \"kube-proxy-6nbs9\" (UID: \"a01642a9-eb26-4631-a2d1-a3dfc0c34101\") " pod="kube-system/kube-proxy-6nbs9" Sep 13 00:13:00.199918 kubelet[3252]: I0913 00:13:00.199800 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a01642a9-eb26-4631-a2d1-a3dfc0c34101-xtables-lock\") pod \"kube-proxy-6nbs9\" (UID: \"a01642a9-eb26-4631-a2d1-a3dfc0c34101\") " pod="kube-system/kube-proxy-6nbs9" Sep 13 00:13:00.199918 kubelet[3252]: I0913 00:13:00.199852 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a01642a9-eb26-4631-a2d1-a3dfc0c34101-lib-modules\") pod \"kube-proxy-6nbs9\" (UID: \"a01642a9-eb26-4631-a2d1-a3dfc0c34101\") " pod="kube-system/kube-proxy-6nbs9" Sep 13 00:13:00.199918 kubelet[3252]: I0913 00:13:00.199874 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4jlj\" (UniqueName: \"kubernetes.io/projected/a01642a9-eb26-4631-a2d1-a3dfc0c34101-kube-api-access-w4jlj\") pod \"kube-proxy-6nbs9\" (UID: \"a01642a9-eb26-4631-a2d1-a3dfc0c34101\") " pod="kube-system/kube-proxy-6nbs9" Sep 13 00:13:00.300617 kubelet[3252]: I0913 00:13:00.300448 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qz8\" (UniqueName: \"kubernetes.io/projected/5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5-kube-api-access-z8qz8\") pod \"tigera-operator-58fc44c59b-g69hk\" (UID: \"5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5\") " pod="tigera-operator/tigera-operator-58fc44c59b-g69hk" Sep 13 00:13:00.300617 kubelet[3252]: I0913 00:13:00.300567 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5-var-lib-calico\") pod \"tigera-operator-58fc44c59b-g69hk\" (UID: \"5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5\") " pod="tigera-operator/tigera-operator-58fc44c59b-g69hk" Sep 13 00:13:00.334295 containerd[1720]: 
time="2025-09-13T00:13:00.334247798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nbs9,Uid:a01642a9-eb26-4631-a2d1-a3dfc0c34101,Namespace:kube-system,Attempt:0,}" Sep 13 00:13:00.381684 containerd[1720]: time="2025-09-13T00:13:00.381578875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:00.381684 containerd[1720]: time="2025-09-13T00:13:00.381629976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:00.381972 containerd[1720]: time="2025-09-13T00:13:00.381666977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:00.381972 containerd[1720]: time="2025-09-13T00:13:00.381848080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:00.407687 systemd[1]: Started cri-containerd-80d69b0b329ea9079db71d2331ff8c0eab5c95786f4cf1c1efb87611c7ab2659.scope - libcontainer container 80d69b0b329ea9079db71d2331ff8c0eab5c95786f4cf1c1efb87611c7ab2659. Sep 13 00:13:00.438271 containerd[1720]: time="2025-09-13T00:13:00.438141404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nbs9,Uid:a01642a9-eb26-4631-a2d1-a3dfc0c34101,Namespace:kube-system,Attempt:0,} returns sandbox id \"80d69b0b329ea9079db71d2331ff8c0eab5c95786f4cf1c1efb87611c7ab2659\"" Sep 13 00:13:00.442217 containerd[1720]: time="2025-09-13T00:13:00.442183471Z" level=info msg="CreateContainer within sandbox \"80d69b0b329ea9079db71d2331ff8c0eab5c95786f4cf1c1efb87611c7ab2659\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:13:00.478500 containerd[1720]: time="2025-09-13T00:13:00.478419066Z" level=info msg="CreateContainer within sandbox \"80d69b0b329ea9079db71d2331ff8c0eab5c95786f4cf1c1efb87611c7ab2659\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6316df11985b73ca1045cc4800b29fde112b2f7cf81928bd2fdbdd0aa094ec5a\"" Sep 13 00:13:00.479165 containerd[1720]: time="2025-09-13T00:13:00.479097077Z" level=info msg="StartContainer for \"6316df11985b73ca1045cc4800b29fde112b2f7cf81928bd2fdbdd0aa094ec5a\"" Sep 13 00:13:00.482041 containerd[1720]: time="2025-09-13T00:13:00.481639819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-g69hk,Uid:5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:13:00.514192 systemd[1]: Started cri-containerd-6316df11985b73ca1045cc4800b29fde112b2f7cf81928bd2fdbdd0aa094ec5a.scope - libcontainer container 6316df11985b73ca1045cc4800b29fde112b2f7cf81928bd2fdbdd0aa094ec5a. Sep 13 00:13:00.553705 containerd[1720]: time="2025-09-13T00:13:00.553244895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:00.553705 containerd[1720]: time="2025-09-13T00:13:00.553310596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:00.553705 containerd[1720]: time="2025-09-13T00:13:00.553332696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:00.553705 containerd[1720]: time="2025-09-13T00:13:00.553431198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:00.554716 containerd[1720]: time="2025-09-13T00:13:00.554608617Z" level=info msg="StartContainer for \"6316df11985b73ca1045cc4800b29fde112b2f7cf81928bd2fdbdd0aa094ec5a\" returns successfully" Sep 13 00:13:00.583703 systemd[1]: Started cri-containerd-ac1383076ea7ce6e4e304cd9c54a6e4bfb83c3efb34294dfe1f69e3562bb2766.scope - libcontainer container ac1383076ea7ce6e4e304cd9c54a6e4bfb83c3efb34294dfe1f69e3562bb2766. Sep 13 00:13:00.636550 containerd[1720]: time="2025-09-13T00:13:00.636403961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-g69hk,Uid:5fa6eb7d-d032-45a9-a4c9-4b711af3c2a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ac1383076ea7ce6e4e304cd9c54a6e4bfb83c3efb34294dfe1f69e3562bb2766\"" Sep 13 00:13:00.641099 containerd[1720]: time="2025-09-13T00:13:00.640640331Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:13:01.253991 kubelet[3252]: I0913 00:13:01.253920 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6nbs9" podStartSLOduration=1.2538950039999999 podStartE2EDuration="1.253895004s" podCreationTimestamp="2025-09-13 00:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:13:01.253088191 +0000 UTC m=+6.155119709" watchObservedRunningTime="2025-09-13 00:13:01.253895004 +0000 UTC m=+6.155926422" Sep 13 00:13:02.421658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818931218.mount: Deactivated successfully. 
Sep 13 00:13:04.223490 containerd[1720]: time="2025-09-13T00:13:04.223364783Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:13:04.226247 containerd[1720]: time="2025-09-13T00:13:04.226195330Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:13:04.229376 containerd[1720]: time="2025-09-13T00:13:04.229324981Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:13:04.233718 containerd[1720]: time="2025-09-13T00:13:04.233687453Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:13:04.234799 containerd[1720]: time="2025-09-13T00:13:04.234316963Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.593632132s"
Sep 13 00:13:04.234799 containerd[1720]: time="2025-09-13T00:13:04.234357464Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:13:04.238000 containerd[1720]: time="2025-09-13T00:13:04.237972323Z" level=info msg="CreateContainer within sandbox \"ac1383076ea7ce6e4e304cd9c54a6e4bfb83c3efb34294dfe1f69e3562bb2766\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:13:04.278184 containerd[1720]: time="2025-09-13T00:13:04.278144883Z" level=info msg="CreateContainer within sandbox \"ac1383076ea7ce6e4e304cd9c54a6e4bfb83c3efb34294dfe1f69e3562bb2766\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5c3968129eb7d3cc52c459c7bb0706dc524f95f70783acecf2b5b286ad584fe3\""
Sep 13 00:13:04.279131 containerd[1720]: time="2025-09-13T00:13:04.279096999Z" level=info msg="StartContainer for \"5c3968129eb7d3cc52c459c7bb0706dc524f95f70783acecf2b5b286ad584fe3\""
Sep 13 00:13:04.315654 systemd[1]: Started cri-containerd-5c3968129eb7d3cc52c459c7bb0706dc524f95f70783acecf2b5b286ad584fe3.scope - libcontainer container 5c3968129eb7d3cc52c459c7bb0706dc524f95f70783acecf2b5b286ad584fe3.
Sep 13 00:13:04.345531 containerd[1720]: time="2025-09-13T00:13:04.345482989Z" level=info msg="StartContainer for \"5c3968129eb7d3cc52c459c7bb0706dc524f95f70783acecf2b5b286ad584fe3\" returns successfully"
Sep 13 00:13:10.773625 sudo[2353]: pam_unix(sudo:session): session closed for user root
Sep 13 00:13:10.876712 sshd[2350]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:10.880810 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:13:10.881500 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:41066.service: Deactivated successfully.
Sep 13 00:13:10.887264 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:13:10.887713 systemd[1]: session-9.scope: Consumed 4.247s CPU time, 157.1M memory peak, 0B memory swap peak.
Sep 13 00:13:10.890510 systemd-logind[1699]: Removed session 9.
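The "Pulled image" entry above shows containerd resolving one tag to two identifiers: the image id (the config blob digest, sha256:1911afdd...) and the repo digest (the manifest digest, sha256:00a7a9b6...). A pull like the one the CRI performed here can be reproduced with containerd's Go client; a hedged sketch, assuming the default socket path and the k8s.io namespace the CRI plugin uses (illustrative, not taken from this log):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.6", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest) // repo tag and digest, as logged above
    }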
Sep 13 00:13:14.975338 kubelet[3252]: I0913 00:13:14.975267 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-g69hk" podStartSLOduration=11.379079642 podStartE2EDuration="14.975242416s" podCreationTimestamp="2025-09-13 00:13:00 +0000 UTC" firstStartedPulling="2025-09-13 00:13:00.639167706 +0000 UTC m=+5.541199124" lastFinishedPulling="2025-09-13 00:13:04.23533048 +0000 UTC m=+9.137361898" observedRunningTime="2025-09-13 00:13:05.270383082 +0000 UTC m=+10.172414600" watchObservedRunningTime="2025-09-13 00:13:14.975242416 +0000 UTC m=+19.877273834"
Sep 13 00:13:14.989633 systemd[1]: Created slice kubepods-besteffort-pod91f05a0a_910c_4efd_a160_8cdd6338e4f2.slice - libcontainer container kubepods-besteffort-pod91f05a0a_910c_4efd_a160_8cdd6338e4f2.slice.
Sep 13 00:13:15.096488 kubelet[3252]: I0913 00:13:15.094870 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91f05a0a-910c-4efd-a160-8cdd6338e4f2-typha-certs\") pod \"calico-typha-64fdf9957f-djfj7\" (UID: \"91f05a0a-910c-4efd-a160-8cdd6338e4f2\") " pod="calico-system/calico-typha-64fdf9957f-djfj7"
Sep 13 00:13:15.096488 kubelet[3252]: I0913 00:13:15.094926 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91f05a0a-910c-4efd-a160-8cdd6338e4f2-tigera-ca-bundle\") pod \"calico-typha-64fdf9957f-djfj7\" (UID: \"91f05a0a-910c-4efd-a160-8cdd6338e4f2\") " pod="calico-system/calico-typha-64fdf9957f-djfj7"
Sep 13 00:13:15.096488 kubelet[3252]: I0913 00:13:15.094954 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kh4v\" (UniqueName: \"kubernetes.io/projected/91f05a0a-910c-4efd-a160-8cdd6338e4f2-kube-api-access-9kh4v\") pod \"calico-typha-64fdf9957f-djfj7\" (UID: \"91f05a0a-910c-4efd-a160-8cdd6338e4f2\") " pod="calico-system/calico-typha-64fdf9957f-djfj7"
Sep 13 00:13:15.300913 containerd[1720]: time="2025-09-13T00:13:15.300872934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fdf9957f-djfj7,Uid:91f05a0a-910c-4efd-a160-8cdd6338e4f2,Namespace:calico-system,Attempt:0,}"
Sep 13 00:13:15.356522 systemd[1]: Created slice kubepods-besteffort-podae6a2b56_1aef_4e9f_a469_e1d27e3ee8f0.slice - libcontainer container kubepods-besteffort-podae6a2b56_1aef_4e9f_a469_e1d27e3ee8f0.slice.
Sep 13 00:13:15.365740 containerd[1720]: time="2025-09-13T00:13:15.365448849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:13:15.365740 containerd[1720]: time="2025-09-13T00:13:15.365538950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:13:15.365740 containerd[1720]: time="2025-09-13T00:13:15.365554150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:13:15.365962 containerd[1720]: time="2025-09-13T00:13:15.365659652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:13:15.403437 systemd[1]: Started cri-containerd-0909bad1641207c30ad153cc9882e0c7e6e9db2c8893769c63d44805e276f951.scope - libcontainer container 0909bad1641207c30ad153cc9882e0c7e6e9db2c8893769c63d44805e276f951.
Sep 13 00:13:15.476246 containerd[1720]: time="2025-09-13T00:13:15.476206160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fdf9957f-djfj7,Uid:91f05a0a-910c-4efd-a160-8cdd6338e4f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"0909bad1641207c30ad153cc9882e0c7e6e9db2c8893769c63d44805e276f951\""
Sep 13 00:13:15.479806 containerd[1720]: time="2025-09-13T00:13:15.479771621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:13:15.497317 kubelet[3252]: I0913 00:13:15.496908 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-cni-log-dir\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497317 kubelet[3252]: I0913 00:13:15.496957 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-tigera-ca-bundle\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497317 kubelet[3252]: I0913 00:13:15.496982 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-cni-net-dir\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497317 kubelet[3252]: I0913 00:13:15.497004 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-var-lib-calico\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497317 kubelet[3252]: I0913 00:13:15.497027 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-xtables-lock\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497668 kubelet[3252]: I0913 00:13:15.497052 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-cni-bin-dir\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497668 kubelet[3252]: I0913 00:13:15.497077 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-node-certs\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
Sep 13 00:13:15.497668 kubelet[3252]: I0913 00:13:15.497099 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64bn\" (UniqueName: \"kubernetes.io/projected/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-kube-api-access-b64bn\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64bn\" (UniqueName: \"kubernetes.io/projected/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-kube-api-access-b64bn\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7" Sep 13 00:13:15.497668 kubelet[3252]: I0913 00:13:15.497122 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-flexvol-driver-host\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7" Sep 13 00:13:15.497668 kubelet[3252]: I0913 00:13:15.497143 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-lib-modules\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7" Sep 13 00:13:15.497865 kubelet[3252]: I0913 00:13:15.497163 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-var-run-calico\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7" Sep 13 00:13:15.497865 kubelet[3252]: I0913 00:13:15.497190 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0-policysync\") pod \"calico-node-hwdj7\" (UID: \"ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0\") " pod="calico-system/calico-node-hwdj7" Sep 13 00:13:15.605819 kubelet[3252]: E0913 00:13:15.605406 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.605819 kubelet[3252]: W0913 00:13:15.605447 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.605819 kubelet[3252]: E0913 00:13:15.605566 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.608335 kubelet[3252]: E0913 00:13:15.607400 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.608335 kubelet[3252]: W0913 00:13:15.607848 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.608335 kubelet[3252]: E0913 00:13:15.607879 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.608335 kubelet[3252]: E0913 00:13:15.608295 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.608335 kubelet[3252]: W0913 00:13:15.608307 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.609867 kubelet[3252]: E0913 00:13:15.609321 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.609867 kubelet[3252]: W0913 00:13:15.609361 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.609867 kubelet[3252]: E0913 00:13:15.609695 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.609867 kubelet[3252]: E0913 00:13:15.609729 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.609867 kubelet[3252]: E0913 00:13:15.609810 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.609867 kubelet[3252]: W0913 00:13:15.609820 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.609867 kubelet[3252]: E0913 00:13:15.609833 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.610999 kubelet[3252]: E0913 00:13:15.610650 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.610999 kubelet[3252]: W0913 00:13:15.610667 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.610999 kubelet[3252]: E0913 00:13:15.610699 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.610999 kubelet[3252]: E0913 00:13:15.610965 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.610999 kubelet[3252]: W0913 00:13:15.610976 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.611491 kubelet[3252]: E0913 00:13:15.611322 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.611799 kubelet[3252]: E0913 00:13:15.611696 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.611799 kubelet[3252]: W0913 00:13:15.611709 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.611799 kubelet[3252]: E0913 00:13:15.611744 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.612210 kubelet[3252]: E0913 00:13:15.612154 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.612210 kubelet[3252]: W0913 00:13:15.612166 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.612210 kubelet[3252]: E0913 00:13:15.612190 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.619481 kubelet[3252]: E0913 00:13:15.619354 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.619481 kubelet[3252]: W0913 00:13:15.619391 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.619481 kubelet[3252]: E0913 00:13:15.619413 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.665511 containerd[1720]: time="2025-09-13T00:13:15.664866015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwdj7,Uid:ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:15.691786 kubelet[3252]: E0913 00:13:15.690958 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:15.722827 kubelet[3252]: E0913 00:13:15.722501 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.722827 kubelet[3252]: W0913 00:13:15.722541 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.722827 kubelet[3252]: E0913 00:13:15.722573 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.724760 kubelet[3252]: E0913 00:13:15.724726 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.724760 kubelet[3252]: W0913 00:13:15.724749 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.724969 kubelet[3252]: E0913 00:13:15.724777 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.725035 kubelet[3252]: E0913 00:13:15.725003 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.725035 kubelet[3252]: W0913 00:13:15.725015 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.725035 kubelet[3252]: E0913 00:13:15.725031 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725244 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726309 kubelet[3252]: W0913 00:13:15.725255 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725269 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725503 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726309 kubelet[3252]: W0913 00:13:15.725514 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725527 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725721 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726309 kubelet[3252]: W0913 00:13:15.725731 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725743 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.726309 kubelet[3252]: E0913 00:13:15.725928 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726850 kubelet[3252]: W0913 00:13:15.725937 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726850 kubelet[3252]: E0913 00:13:15.725949 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.726850 kubelet[3252]: E0913 00:13:15.726138 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726850 kubelet[3252]: W0913 00:13:15.726149 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726850 kubelet[3252]: E0913 00:13:15.726163 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.726850 kubelet[3252]: E0913 00:13:15.726448 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.726850 kubelet[3252]: W0913 00:13:15.726460 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.726850 kubelet[3252]: E0913 00:13:15.726609 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.727728 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.728738 kubelet[3252]: W0913 00:13:15.727742 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.727758 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.727964 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.728738 kubelet[3252]: W0913 00:13:15.727976 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.727990 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.728197 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.728738 kubelet[3252]: W0913 00:13:15.728208 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.728222 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.728738 kubelet[3252]: E0913 00:13:15.728438 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.729917 kubelet[3252]: W0913 00:13:15.728450 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.729917 kubelet[3252]: E0913 00:13:15.729398 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.729917 kubelet[3252]: E0913 00:13:15.729881 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.729917 kubelet[3252]: W0913 00:13:15.729894 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.729917 kubelet[3252]: E0913 00:13:15.729908 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.730417 kubelet[3252]: E0913 00:13:15.730373 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.730417 kubelet[3252]: W0913 00:13:15.730389 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.730417 kubelet[3252]: E0913 00:13:15.730403 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.730961 kubelet[3252]: E0913 00:13:15.730887 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.730961 kubelet[3252]: W0913 00:13:15.730903 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.730961 kubelet[3252]: E0913 00:13:15.730916 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:15.731594 kubelet[3252]: E0913 00:13:15.731421 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.731594 kubelet[3252]: W0913 00:13:15.731434 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.731594 kubelet[3252]: E0913 00:13:15.731448 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.732212 kubelet[3252]: E0913 00:13:15.732080 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.732212 kubelet[3252]: W0913 00:13:15.732096 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.732212 kubelet[3252]: E0913 00:13:15.732111 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.733218 kubelet[3252]: E0913 00:13:15.732514 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.733218 kubelet[3252]: W0913 00:13:15.732531 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.733218 kubelet[3252]: E0913 00:13:15.732545 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.733218 kubelet[3252]: E0913 00:13:15.732795 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:15.733218 kubelet[3252]: W0913 00:13:15.732808 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:15.733218 kubelet[3252]: E0913 00:13:15.732821 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:13:15.735706 containerd[1720]: time="2025-09-13T00:13:15.734844522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:15.735706 containerd[1720]: time="2025-09-13T00:13:15.735141627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:15.735706 containerd[1720]: time="2025-09-13T00:13:15.735176028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:15.736332 containerd[1720]: time="2025-09-13T00:13:15.736158445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
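The three-line pattern that floods this stretch of the log is one failure surfacing at three layers: kubelet's dynamic plugin prober finds the nodeagent~uds directory, tries to exec the driver binary it expects inside (driver-call.go), the exec fails because the uds executable is absent, and the resulting empty stdout then fails JSON decoding (hence "unexpected end of JSON input"). That error string is exactly what Go's encoding/json returns for empty input; a minimal reproduction, not kubelet code (the DriverStatus shape follows the FlexVolume convention of a JSON {"status": ...} reply and is an assumption, not something shown in this log):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus mirrors the minimal shape of a FlexVolume driver reply.
    type DriverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        var st DriverStatus
        // The driver binary never ran, so its captured output is empty.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }

The prober re-runs whenever it rescans the plugin directory, which is why the identical triplet recurs so densely while calico-node is still being set up.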
Sep 13 00:13:15.768744 systemd[1]: Started cri-containerd-17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0.scope - libcontainer container 17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0.
[the FlexVolume probe failure resumes at 00:13:15.800 and repeats through 00:13:15.811, interleaved with the volume-attach entries below; duplicates elided]
Sep 13 00:13:15.801679 kubelet[3252]: I0913 00:13:15.801650 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmzbg\" (UniqueName: \"kubernetes.io/projected/f346ff52-85ec-4854-bcc7-81887dda8d38-kube-api-access-zmzbg\") pod \"csi-node-driver-l9bxj\" (UID: \"f346ff52-85ec-4854-bcc7-81887dda8d38\") " pod="calico-system/csi-node-driver-l9bxj"
Sep 13 00:13:15.805338 kubelet[3252]: I0913 00:13:15.805219 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f346ff52-85ec-4854-bcc7-81887dda8d38-kubelet-dir\") pod \"csi-node-driver-l9bxj\" (UID: \"f346ff52-85ec-4854-bcc7-81887dda8d38\") " pod="calico-system/csi-node-driver-l9bxj"
Sep 13 00:13:15.807496 kubelet[3252]: I0913 00:13:15.806221 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f346ff52-85ec-4854-bcc7-81887dda8d38-socket-dir\") pod \"csi-node-driver-l9bxj\" (UID: \"f346ff52-85ec-4854-bcc7-81887dda8d38\") " pod="calico-system/csi-node-driver-l9bxj"
Sep 13 00:13:15.808894 kubelet[3252]: I0913 00:13:15.808604 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f346ff52-85ec-4854-bcc7-81887dda8d38-varrun\") pod \"csi-node-driver-l9bxj\" (UID: \"f346ff52-85ec-4854-bcc7-81887dda8d38\") " pod="calico-system/csi-node-driver-l9bxj"
Sep 13 00:13:15.811505 kubelet[3252]: I0913 00:13:15.810650 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f346ff52-85ec-4854-bcc7-81887dda8d38-registration-dir\") pod \"csi-node-driver-l9bxj\" (UID: \"f346ff52-85ec-4854-bcc7-81887dda8d38\") " pod="calico-system/csi-node-driver-l9bxj"
Sep 13 00:13:15.857082 containerd[1720]: time="2025-09-13T00:13:15.856758526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwdj7,Uid:ae6a2b56-1aef-4e9f-a469-e1d27e3ee8f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\""
[a final burst of the same FlexVolume probe failure runs from 00:13:15.912 through 00:13:15.933, at which point the excerpt breaks off mid-entry; duplicates elided]
Error: unexpected end of JSON input" Sep 13 00:13:16.781433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71180994.mount: Deactivated successfully. Sep 13 00:13:17.193111 kubelet[3252]: E0913 00:13:17.192965 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:17.828273 containerd[1720]: time="2025-09-13T00:13:17.828214841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:17.831815 containerd[1720]: time="2025-09-13T00:13:17.831667101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:13:17.835608 containerd[1720]: time="2025-09-13T00:13:17.834546251Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:17.839777 containerd[1720]: time="2025-09-13T00:13:17.839740340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:17.841180 containerd[1720]: time="2025-09-13T00:13:17.840717857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.360902135s" Sep 13 00:13:17.841180 containerd[1720]: time="2025-09-13T00:13:17.840996162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:13:17.848064 containerd[1720]: time="2025-09-13T00:13:17.846871863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:13:17.864982 containerd[1720]: time="2025-09-13T00:13:17.864938375Z" level=info msg="CreateContainer within sandbox \"0909bad1641207c30ad153cc9882e0c7e6e9db2c8893769c63d44805e276f951\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:13:17.917479 containerd[1720]: time="2025-09-13T00:13:17.917412280Z" level=info msg="CreateContainer within sandbox \"0909bad1641207c30ad153cc9882e0c7e6e9db2c8893769c63d44805e276f951\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"432d7ff9f39ef0b608f48d5be50e88ee1c300955ab2909690be01ad2d1ff194d\"" Sep 13 00:13:17.918777 containerd[1720]: time="2025-09-13T00:13:17.918301296Z" level=info msg="StartContainer for \"432d7ff9f39ef0b608f48d5be50e88ee1c300955ab2909690be01ad2d1ff194d\"" Sep 13 00:13:17.956703 systemd[1]: Started cri-containerd-432d7ff9f39ef0b608f48d5be50e88ee1c300955ab2909690be01ad2d1ff194d.scope - libcontainer container 432d7ff9f39ef0b608f48d5be50e88ee1c300955ab2909690be01ad2d1ff194d. 
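The triplet above is the kubelet's FlexVolume plugin prober in action: on each probe it executes the driver binary expected at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses a JSON status object from stdout. Because the binary is missing, the call yields empty output and the JSON decode fails with "unexpected end of JSON input". A minimal sketch of the handshake a FlexVolume driver is expected to implement, assuming the stock calling convention (an illustration, not the actual nodeagent~uds driver):

```go
// Hypothetical stand-in for a FlexVolume driver binary. The kubelet invokes
// the executable with a subcommand ("init", "mount", "unmount", ...) and
// parses a JSON status object from stdout; empty output produces exactly
// the "unexpected end of JSON input" errors seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver has no separate
		// attach/detach phase.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	reply(driverStatus{Status: "Not supported"})
}
```

Calico normally ships this driver through its flexvol-driver init container, which appears further down in this log.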
Sep 13 00:13:18.014731 containerd[1720]: time="2025-09-13T00:13:18.014683259Z" level=info msg="StartContainer for \"432d7ff9f39ef0b608f48d5be50e88ee1c300955ab2909690be01ad2d1ff194d\" returns successfully" Sep 13 00:13:18.319210 kubelet[3252]: I0913 00:13:18.319137 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64fdf9957f-djfj7" podStartSLOduration=1.9529468859999999 podStartE2EDuration="4.319116312s" podCreationTimestamp="2025-09-13 00:13:14 +0000 UTC" firstStartedPulling="2025-09-13 00:13:15.479391615 +0000 UTC m=+20.381423033" lastFinishedPulling="2025-09-13 00:13:17.845561041 +0000 UTC m=+22.747592459" observedRunningTime="2025-09-13 00:13:18.318968109 +0000 UTC m=+23.220999527" watchObservedRunningTime="2025-09-13 00:13:18.319116312 +0000 UTC m=+23.221147730" Sep 13 00:13:18.353563 kubelet[3252]: E0913 00:13:18.353523 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:18.353563 kubelet[3252]: W0913 00:13:18.353560 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:18.353800 kubelet[3252]: E0913 00:13:18.353589 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
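A consistency check on the pod_startup_latency_tracker entry above: the SLO duration excludes the time spent pulling images, i.e. podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling) = 4.319116312s - (m=+22.747592459 - m=+20.381423033) = 4.319116312s - 2.366169426s = 1.952946886s, which matches the logged value up to floating-point formatting.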
[the same FlexVolume probe triplet repeats verbatim through 00:13:18.440; duplicate repetitions trimmed]
Sep 13 00:13:18.441480 kubelet[3252]: E0913 00:13:18.441451 3252 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:13:18.441577 kubelet[3252]: W0913 00:13:18.441564 3252 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:13:18.441655 kubelet[3252]: E0913 00:13:18.441643 3252 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:13:19.132782 containerd[1720]: time="2025-09-13T00:13:19.132733450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:19.135971 containerd[1720]: time="2025-09-13T00:13:19.135902504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:13:19.140003 containerd[1720]: time="2025-09-13T00:13:19.139886773Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:19.148014 containerd[1720]: time="2025-09-13T00:13:19.147549105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:19.149783 containerd[1720]: time="2025-09-13T00:13:19.149743743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.302784678s" Sep 13 00:13:19.150160 containerd[1720]: time="2025-09-13T00:13:19.150042848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:13:19.154021 containerd[1720]: time="2025-09-13T00:13:19.153986016Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:13:19.193224 kubelet[3252]: E0913 00:13:19.192932 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:19.204350 containerd[1720]: time="2025-09-13T00:13:19.204298985Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9\"" Sep 13 00:13:19.205115 containerd[1720]: time="2025-09-13T00:13:19.204774993Z" level=info msg="StartContainer for \"b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9\"" Sep 13 00:13:19.240654 systemd[1]: run-containerd-runc-k8s.io-b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9-runc.lMfI8X.mount: Deactivated successfully. Sep 13 00:13:19.252632 systemd[1]: Started cri-containerd-b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9.scope - libcontainer container b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9. 
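Worth connecting two threads here: pod2daemon-flexvol is the image for Calico's flexvol-driver init container created above, and per Calico's design it installs the nodeagent~uds FlexVolume binary, the very uds executable whose absence produced the probe error bursts earlier. Consistent with that, the nodeagent~uds triplet does not recur in this log after this container runs.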
Sep 13 00:13:19.285079 containerd[1720]: time="2025-09-13T00:13:19.284910175Z" level=info msg="StartContainer for \"b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9\" returns successfully" Sep 13 00:13:19.297438 kubelet[3252]: I0913 00:13:19.297152 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:13:19.298241 systemd[1]: cri-containerd-b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9.scope: Deactivated successfully. Sep 13 00:13:19.852343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9-rootfs.mount: Deactivated successfully. Sep 13 00:13:20.812802 containerd[1720]: time="2025-09-13T00:13:20.812547534Z" level=info msg="shim disconnected" id=b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9 namespace=k8s.io Sep 13 00:13:20.814506 containerd[1720]: time="2025-09-13T00:13:20.813311047Z" level=warning msg="cleaning up after shim disconnected" id=b2b5dc09bf9fb767f9c6ea7843737127a782d58c6fcf58d8d6400a751b9923a9 namespace=k8s.io Sep 13 00:13:20.814506 containerd[1720]: time="2025-09-13T00:13:20.813570551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:13:21.194603 kubelet[3252]: E0913 00:13:21.194438 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:21.304030 containerd[1720]: time="2025-09-13T00:13:21.303539105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:13:23.193736 kubelet[3252]: E0913 00:13:23.192952 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:24.516851 containerd[1720]: time="2025-09-13T00:13:24.516768767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:24.522380 containerd[1720]: time="2025-09-13T00:13:24.522319064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:13:24.526498 containerd[1720]: time="2025-09-13T00:13:24.525200315Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:24.529804 containerd[1720]: time="2025-09-13T00:13:24.529759495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:24.530551 containerd[1720]: time="2025-09-13T00:13:24.530516709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.226928702s" Sep 13 00:13:24.530636 
containerd[1720]: time="2025-09-13T00:13:24.530556909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:13:24.533681 containerd[1720]: time="2025-09-13T00:13:24.533654864Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:13:24.570108 containerd[1720]: time="2025-09-13T00:13:24.570051205Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104\"" Sep 13 00:13:24.571053 containerd[1720]: time="2025-09-13T00:13:24.571012421Z" level=info msg="StartContainer for \"498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104\"" Sep 13 00:13:24.608103 systemd[1]: Started cri-containerd-498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104.scope - libcontainer container 498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104. Sep 13 00:13:24.646993 containerd[1720]: time="2025-09-13T00:13:24.646943958Z" level=info msg="StartContainer for \"498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104\" returns successfully" Sep 13 00:13:25.193709 kubelet[3252]: E0913 00:13:25.192681 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:26.285036 containerd[1720]: time="2025-09-13T00:13:26.284978793Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:13:26.288296 systemd[1]: cri-containerd-498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104.scope: Deactivated successfully. Sep 13 00:13:26.311965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104-rootfs.mount: Deactivated successfully. 
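The reload error above is containerd's CNI conf syncer reacting to a write under /etc/cni/net.d: the install-cni container has written calico-kubeconfig, but a kubeconfig is not a network definition, so until a *.conf/*.conflist file lands there the runtime keeps reporting "cni plugin not initialized". A sketch of that directory scan using the CNI project's libcni, assuming the conventional /etc/cni/net.d path (illustrative, not containerd's exact code):

```go
// Sketch: enumerate and parse CNI network configs the way a runtime does,
// via github.com/containernetworking/cni/libcni. An empty result here
// corresponds to the "no network config found in /etc/cni/net.d" error.
package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		log.Fatal("no network config found in /etc/cni/net.d")
	}
	for _, f := range files {
		// .conflist files parse as a plugin chain; plain .conf files would
		// need libcni.ConfFromFile instead.
		confList, err := libcni.ConfListFromFile(f)
		if err != nil {
			log.Printf("skipping %s: %v", f, err)
			continue
		}
		fmt.Printf("loaded network %q (%d plugins)\n", confList.Name, len(confList.Plugins))
	}
}
```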
Sep 13 00:13:26.365326 kubelet[3252]: I0913 00:13:26.365293 3252 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:13:26.907958 kubelet[3252]: W0913 00:13:26.440720 3252 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-n-78cb87e672" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object Sep 13 00:13:26.907958 kubelet[3252]: E0913 00:13:26.440770 3252 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-n-78cb87e672\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object" logger="UnhandledError" Sep 13 00:13:26.907958 kubelet[3252]: W0913 00:13:26.442527 3252 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081.3.5-n-78cb87e672" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object Sep 13 00:13:26.907958 kubelet[3252]: E0913 00:13:26.442562 3252 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081.3.5-n-78cb87e672\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object" logger="UnhandledError" Sep 13 00:13:26.907958 kubelet[3252]: W0913 00:13:26.445227 3252 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.5-n-78cb87e672" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object Sep 13 00:13:26.416929 systemd[1]: Created slice kubepods-burstable-podcf31218c_978b_4717_b2c0_67e187ff41da.slice - libcontainer container kubepods-burstable-podcf31218c_978b_4717_b2c0_67e187ff41da.slice. 
Sep 13 00:13:26.908397 kubelet[3252]: E0913 00:13:26.445262 3252 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.5-n-78cb87e672\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.5-n-78cb87e672' and this object" logger="UnhandledError" Sep 13 00:13:26.908397 kubelet[3252]: I0913 00:13:26.493552 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpfhs\" (UniqueName: \"kubernetes.io/projected/cf31218c-978b-4717-b2c0-67e187ff41da-kube-api-access-vpfhs\") pod \"coredns-7c65d6cfc9-n4vhw\" (UID: \"cf31218c-978b-4717-b2c0-67e187ff41da\") " pod="kube-system/coredns-7c65d6cfc9-n4vhw" Sep 13 00:13:26.908397 kubelet[3252]: I0913 00:13:26.493606 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a930d21c-bb14-4796-bdaa-457b525327fa-config-volume\") pod \"coredns-7c65d6cfc9-pn2sm\" (UID: \"a930d21c-bb14-4796-bdaa-457b525327fa\") " pod="kube-system/coredns-7c65d6cfc9-pn2sm" Sep 13 00:13:26.908397 kubelet[3252]: I0913 00:13:26.493636 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh4rj\" (UniqueName: \"kubernetes.io/projected/a930d21c-bb14-4796-bdaa-457b525327fa-kube-api-access-gh4rj\") pod \"coredns-7c65d6cfc9-pn2sm\" (UID: \"a930d21c-bb14-4796-bdaa-457b525327fa\") " pod="kube-system/coredns-7c65d6cfc9-pn2sm" Sep 13 00:13:26.908397 kubelet[3252]: I0913 00:13:26.493663 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf31218c-978b-4717-b2c0-67e187ff41da-config-volume\") pod \"coredns-7c65d6cfc9-n4vhw\" (UID: \"cf31218c-978b-4717-b2c0-67e187ff41da\") " pod="kube-system/coredns-7c65d6cfc9-n4vhw" Sep 13 00:13:26.441979 systemd[1]: Created slice kubepods-burstable-poda930d21c_bb14_4796_bdaa_457b525327fa.slice - libcontainer container kubepods-burstable-poda930d21c_bb14_4796_bdaa_457b525327fa.slice. 
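The reflector warnings above are expected while these pods are being scheduled on a cluster with the Node authorizer: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, so list/watch attempts made before the API server observes that binding are rejected with "no relationship found between node ... and this object", and the reflectors simply retry until the reference is established.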
Sep 13 00:13:26.908729 kubelet[3252]: I0913 00:13:26.594350 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-ca-bundle\") pod \"whisker-7d4cc886d7-krggt\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " pod="calico-system/whisker-7d4cc886d7-krggt" Sep 13 00:13:26.908729 kubelet[3252]: I0913 00:13:26.594403 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5cc334d-ce94-427b-92b7-c5a6c28738bf-calico-apiserver-certs\") pod \"calico-apiserver-fc4d7f994-ts7vf\" (UID: \"c5cc334d-ce94-427b-92b7-c5a6c28738bf\") " pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" Sep 13 00:13:26.908729 kubelet[3252]: I0913 00:13:26.594421 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5dc69410-9fe3-44c7-9677-e9d7c8975f57-calico-apiserver-certs\") pod \"calico-apiserver-5b854d5954-dnf7s\" (UID: \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\") " pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" Sep 13 00:13:26.908729 kubelet[3252]: I0913 00:13:26.594446 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e3ad624e-065e-4802-ad9c-31cc5acb09b3-goldmane-key-pair\") pod \"goldmane-7988f88666-8dh9r\" (UID: \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\") " pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 00:13:26.908729 kubelet[3252]: I0913 00:13:26.594488 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvcs\" (UniqueName: \"kubernetes.io/projected/878e2756-bf9e-485f-b59b-c9371e461113-kube-api-access-gjvcs\") pod \"calico-kube-controllers-6c967b48b-ks4c2\" (UID: \"878e2756-bf9e-485f-b59b-c9371e461113\") " pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" Sep 13 00:13:26.453345 systemd[1]: Created slice kubepods-besteffort-pod878e2756_bf9e_485f_b59b_c9371e461113.slice - libcontainer container kubepods-besteffort-pod878e2756_bf9e_485f_b59b_c9371e461113.slice. 
Sep 13 00:13:26.909010 kubelet[3252]: I0913 00:13:26.594513 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-backend-key-pair\") pod \"whisker-7d4cc886d7-krggt\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " pod="calico-system/whisker-7d4cc886d7-krggt" Sep 13 00:13:26.909010 kubelet[3252]: I0913 00:13:26.594549 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k6b8\" (UniqueName: \"kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8\") pod \"calico-apiserver-5b854d5954-qtqrp\" (UID: \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\") " pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" Sep 13 00:13:26.909010 kubelet[3252]: I0913 00:13:26.594574 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3ad624e-065e-4802-ad9c-31cc5acb09b3-config\") pod \"goldmane-7988f88666-8dh9r\" (UID: \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\") " pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 00:13:26.909010 kubelet[3252]: I0913 00:13:26.594614 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj7hr\" (UniqueName: \"kubernetes.io/projected/e3ad624e-065e-4802-ad9c-31cc5acb09b3-kube-api-access-zj7hr\") pod \"goldmane-7988f88666-8dh9r\" (UID: \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\") " pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 00:13:26.909010 kubelet[3252]: I0913 00:13:26.594640 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-calico-apiserver-certs\") pod \"calico-apiserver-5b854d5954-qtqrp\" (UID: \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\") " pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" Sep 13 00:13:26.462621 systemd[1]: Created slice kubepods-besteffort-pode3ad624e_065e_4802_ad9c_31cc5acb09b3.slice - libcontainer container kubepods-besteffort-pode3ad624e_065e_4802_ad9c_31cc5acb09b3.slice. 
Sep 13 00:13:26.909289 kubelet[3252]: I0913 00:13:26.594663 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswct\" (UniqueName: \"kubernetes.io/projected/c5cc334d-ce94-427b-92b7-c5a6c28738bf-kube-api-access-wswct\") pod \"calico-apiserver-fc4d7f994-ts7vf\" (UID: \"c5cc334d-ce94-427b-92b7-c5a6c28738bf\") " pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" Sep 13 00:13:26.909289 kubelet[3252]: I0913 00:13:26.594688 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bm7\" (UniqueName: \"kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7\") pod \"calico-apiserver-5b854d5954-dnf7s\" (UID: \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\") " pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" Sep 13 00:13:26.909289 kubelet[3252]: I0913 00:13:26.594709 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3ad624e-065e-4802-ad9c-31cc5acb09b3-goldmane-ca-bundle\") pod \"goldmane-7988f88666-8dh9r\" (UID: \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\") " pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 00:13:26.909289 kubelet[3252]: I0913 00:13:26.594729 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gwm\" (UniqueName: \"kubernetes.io/projected/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-kube-api-access-98gwm\") pod \"whisker-7d4cc886d7-krggt\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " pod="calico-system/whisker-7d4cc886d7-krggt" Sep 13 00:13:26.909289 kubelet[3252]: I0913 00:13:26.594751 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/878e2756-bf9e-485f-b59b-c9371e461113-tigera-ca-bundle\") pod \"calico-kube-controllers-6c967b48b-ks4c2\" (UID: \"878e2756-bf9e-485f-b59b-c9371e461113\") " pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" Sep 13 00:13:26.472430 systemd[1]: Created slice kubepods-besteffort-podcec7c8d9_92bc_4074_ba92_6e1593b9ed77.slice - libcontainer container kubepods-besteffort-podcec7c8d9_92bc_4074_ba92_6e1593b9ed77.slice. Sep 13 00:13:26.478656 systemd[1]: Created slice kubepods-besteffort-podc5cc334d_ce94_427b_92b7_c5a6c28738bf.slice - libcontainer container kubepods-besteffort-podc5cc334d_ce94_427b_92b7_c5a6c28738bf.slice. Sep 13 00:13:26.487946 systemd[1]: Created slice kubepods-besteffort-pod5dc69410_9fe3_44c7_9677_e9d7c8975f57.slice - libcontainer container kubepods-besteffort-pod5dc69410_9fe3_44c7_9677_e9d7c8975f57.slice. Sep 13 00:13:26.493152 systemd[1]: Created slice kubepods-besteffort-podbb2e6fe7_b915_4b49_8446_cefe39dcaa20.slice - libcontainer container kubepods-besteffort-podbb2e6fe7_b915_4b49_8446_cefe39dcaa20.slice. Sep 13 00:13:27.199817 systemd[1]: Created slice kubepods-besteffort-podf346ff52_85ec_4854_bcc7_81887dda8d38.slice - libcontainer container kubepods-besteffort-podf346ff52_85ec_4854_bcc7_81887dda8d38.slice. 
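The Created slice entries follow the kubelet systemd cgroup driver's naming scheme visible in this log: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes mapped to underscores. A small sketch of that mapping, inferred from the names above (the guaranteed-QoS branch is an assumption that such pods omit the QoS segment):

```go
// Sketch: reconstruct the systemd slice name the kubelet's cgroup driver
// uses for a pod, inferred from the "Created slice" entries in this log.
package main

import (
	"fmt"
	"strings"
)

// sliceName maps (QoS class, pod UID) to the kubepods slice name.
func sliceName(qos, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qos == "guaranteed" {
		// Assumed: guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	fmt.Println(sliceName("besteffort", "f346ff52-85ec-4854-bcc7-81887dda8d38"))
	// Output: kubepods-besteffort-podf346ff52_85ec_4854_bcc7_81887dda8d38.slice
}
```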
Sep 13 00:13:27.202576 containerd[1720]: time="2025-09-13T00:13:27.202536546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9bxj,Uid:f346ff52-85ec-4854-bcc7-81887dda8d38,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:27.216024 containerd[1720]: time="2025-09-13T00:13:27.215746478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c967b48b-ks4c2,Uid:878e2756-bf9e-485f-b59b-c9371e461113,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:27.217853 containerd[1720]: time="2025-09-13T00:13:27.217818415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n4vhw,Uid:cf31218c-978b-4717-b2c0-67e187ff41da,Namespace:kube-system,Attempt:0,}" Sep 13 00:13:27.219730 containerd[1720]: time="2025-09-13T00:13:27.219525845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pn2sm,Uid:a930d21c-bb14-4796-bdaa-457b525327fa,Namespace:kube-system,Attempt:0,}" Sep 13 00:13:27.219730 containerd[1720]: time="2025-09-13T00:13:27.219590146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4cc886d7-krggt,Uid:bb2e6fe7-b915-4b49-8446-cefe39dcaa20,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:27.520767 containerd[1720]: time="2025-09-13T00:13:27.520639445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8dh9r,Uid:e3ad624e-065e-4802-ad9c-31cc5acb09b3,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:27.563063 containerd[1720]: time="2025-09-13T00:13:27.563002691Z" level=info msg="shim disconnected" id=498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104 namespace=k8s.io Sep 13 00:13:27.564587 containerd[1720]: time="2025-09-13T00:13:27.563073292Z" level=warning msg="cleaning up after shim disconnected" id=498dbc92299790fe55d090eb691f75481caeffd460d15e6f92b3d1e177fc3104 namespace=k8s.io Sep 13 00:13:27.564587 containerd[1720]: time="2025-09-13T00:13:27.563085193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:13:27.712951 kubelet[3252]: E0913 00:13:27.710409 3252 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.712951 kubelet[3252]: E0913 00:13:27.710456 3252 projected.go:194] Error preparing data for projected volume kube-api-access-s9bm7 for pod calico-apiserver/calico-apiserver-5b854d5954-dnf7s: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.712951 kubelet[3252]: E0913 00:13:27.712385 3252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7 podName:5dc69410-9fe3-44c7-9677-e9d7c8975f57 nodeName:}" failed. No retries permitted until 2025-09-13 00:13:28.21235652 +0000 UTC m=+33.114387938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s9bm7" (UniqueName: "kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7") pod "calico-apiserver-5b854d5954-dnf7s" (UID: "5dc69410-9fe3-44c7-9677-e9d7c8975f57") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.712951 kubelet[3252]: E0913 00:13:27.712279 3252 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.712951 kubelet[3252]: E0913 00:13:27.712658 3252 projected.go:194] Error preparing data for projected volume kube-api-access-wswct for pod calico-apiserver/calico-apiserver-fc4d7f994-ts7vf: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.713445 kubelet[3252]: E0913 00:13:27.712700 3252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5cc334d-ce94-427b-92b7-c5a6c28738bf-kube-api-access-wswct podName:c5cc334d-ce94-427b-92b7-c5a6c28738bf nodeName:}" failed. No retries permitted until 2025-09-13 00:13:28.212685526 +0000 UTC m=+33.114716944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wswct" (UniqueName: "kubernetes.io/projected/c5cc334d-ce94-427b-92b7-c5a6c28738bf-kube-api-access-wswct") pod "calico-apiserver-fc4d7f994-ts7vf" (UID: "c5cc334d-ce94-427b-92b7-c5a6c28738bf") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.720381 kubelet[3252]: E0913 00:13:27.718759 3252 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.720381 kubelet[3252]: E0913 00:13:27.718806 3252 projected.go:194] Error preparing data for projected volume kube-api-access-5k6b8 for pod calico-apiserver/calico-apiserver-5b854d5954-qtqrp: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.720381 kubelet[3252]: E0913 00:13:27.718870 3252 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8 podName:cec7c8d9-92bc-4074-ba92-6e1593b9ed77 nodeName:}" failed. No retries permitted until 2025-09-13 00:13:28.218849835 +0000 UTC m=+33.120881253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5k6b8" (UniqueName: "kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8") pod "calico-apiserver-5b854d5954-qtqrp" (UID: "cec7c8d9-92bc-4074-ba92-6e1593b9ed77") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:13:27.942258 containerd[1720]: time="2025-09-13T00:13:27.942197366Z" level=error msg="Failed to destroy network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.944626 containerd[1720]: time="2025-09-13T00:13:27.944565708Z" level=error msg="encountered an error cleaning up failed sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.944881 containerd[1720]: time="2025-09-13T00:13:27.944729411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c967b48b-ks4c2,Uid:878e2756-bf9e-485f-b59b-c9371e461113,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.946589 kubelet[3252]: E0913 00:13:27.945554 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.946589 kubelet[3252]: E0913 00:13:27.945644 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" Sep 13 00:13:27.946589 kubelet[3252]: E0913 00:13:27.945672 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" Sep 13 00:13:27.946901 kubelet[3252]: E0913 00:13:27.945791 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c967b48b-ks4c2_calico-system(878e2756-bf9e-485f-b59b-c9371e461113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6c967b48b-ks4c2_calico-system(878e2756-bf9e-485f-b59b-c9371e461113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" podUID="878e2756-bf9e-485f-b59b-c9371e461113" Sep 13 00:13:27.955066 containerd[1720]: time="2025-09-13T00:13:27.955011792Z" level=error msg="Failed to destroy network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.955815 containerd[1720]: time="2025-09-13T00:13:27.955775105Z" level=error msg="encountered an error cleaning up failed sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.956002 containerd[1720]: time="2025-09-13T00:13:27.955968709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4cc886d7-krggt,Uid:bb2e6fe7-b915-4b49-8446-cefe39dcaa20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.956390 kubelet[3252]: E0913 00:13:27.956355 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.956696 kubelet[3252]: E0913 00:13:27.956548 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4cc886d7-krggt" Sep 13 00:13:27.956696 kubelet[3252]: E0913 00:13:27.956581 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4cc886d7-krggt" Sep 13 00:13:27.956696 kubelet[3252]: E0913 00:13:27.956650 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-7d4cc886d7-krggt_calico-system(bb2e6fe7-b915-4b49-8446-cefe39dcaa20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d4cc886d7-krggt_calico-system(bb2e6fe7-b915-4b49-8446-cefe39dcaa20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4cc886d7-krggt" podUID="bb2e6fe7-b915-4b49-8446-cefe39dcaa20" Sep 13 00:13:27.970807 containerd[1720]: time="2025-09-13T00:13:27.970750769Z" level=error msg="Failed to destroy network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.971576 containerd[1720]: time="2025-09-13T00:13:27.971498882Z" level=error msg="encountered an error cleaning up failed sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.971927 containerd[1720]: time="2025-09-13T00:13:27.971792187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n4vhw,Uid:cf31218c-978b-4717-b2c0-67e187ff41da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.972424 kubelet[3252]: E0913 00:13:27.972369 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.972532 kubelet[3252]: E0913 00:13:27.972455 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n4vhw" Sep 13 00:13:27.972532 kubelet[3252]: E0913 00:13:27.972493 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-n4vhw" Sep 13 00:13:27.972627 kubelet[3252]: E0913 00:13:27.972552 3252 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-n4vhw_kube-system(cf31218c-978b-4717-b2c0-67e187ff41da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-n4vhw_kube-system(cf31218c-978b-4717-b2c0-67e187ff41da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n4vhw" podUID="cf31218c-978b-4717-b2c0-67e187ff41da" Sep 13 00:13:27.978184 containerd[1720]: time="2025-09-13T00:13:27.978034097Z" level=error msg="Failed to destroy network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.980615 containerd[1720]: time="2025-09-13T00:13:27.980570642Z" level=error msg="encountered an error cleaning up failed sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.980722 containerd[1720]: time="2025-09-13T00:13:27.980648143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9bxj,Uid:f346ff52-85ec-4854-bcc7-81887dda8d38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.981747 kubelet[3252]: E0913 00:13:27.981708 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.981853 kubelet[3252]: E0913 00:13:27.981769 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9bxj" Sep 13 00:13:27.981853 kubelet[3252]: E0913 00:13:27.981796 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l9bxj" Sep 13 00:13:27.981943 kubelet[3252]: E0913 00:13:27.981840 3252 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l9bxj_calico-system(f346ff52-85ec-4854-bcc7-81887dda8d38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l9bxj_calico-system(f346ff52-85ec-4854-bcc7-81887dda8d38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:27.986863 containerd[1720]: time="2025-09-13T00:13:27.986679949Z" level=error msg="Failed to destroy network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.987128 containerd[1720]: time="2025-09-13T00:13:27.987009455Z" level=error msg="encountered an error cleaning up failed sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.987128 containerd[1720]: time="2025-09-13T00:13:27.987078056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8dh9r,Uid:e3ad624e-065e-4802-ad9c-31cc5acb09b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.987390 kubelet[3252]: E0913 00:13:27.987287 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.987390 kubelet[3252]: E0913 00:13:27.987338 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 00:13:27.987390 kubelet[3252]: E0913 00:13:27.987365 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8dh9r" Sep 13 
00:13:27.987578 kubelet[3252]: E0913 00:13:27.987412 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-8dh9r_calico-system(e3ad624e-065e-4802-ad9c-31cc5acb09b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-8dh9r_calico-system(e3ad624e-065e-4802-ad9c-31cc5acb09b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8dh9r" podUID="e3ad624e-065e-4802-ad9c-31cc5acb09b3" Sep 13 00:13:27.990635 containerd[1720]: time="2025-09-13T00:13:27.990597218Z" level=error msg="Failed to destroy network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.991002 containerd[1720]: time="2025-09-13T00:13:27.990875023Z" level=error msg="encountered an error cleaning up failed sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.991002 containerd[1720]: time="2025-09-13T00:13:27.990934724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pn2sm,Uid:a930d21c-bb14-4796-bdaa-457b525327fa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.991252 kubelet[3252]: E0913 00:13:27.991209 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:27.991330 kubelet[3252]: E0913 00:13:27.991276 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pn2sm" Sep 13 00:13:27.991330 kubelet[3252]: E0913 00:13:27.991300 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pn2sm" Sep 13 00:13:27.991461 kubelet[3252]: E0913 00:13:27.991365 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pn2sm_kube-system(a930d21c-bb14-4796-bdaa-457b525327fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pn2sm_kube-system(a930d21c-bb14-4796-bdaa-457b525327fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pn2sm" podUID="a930d21c-bb14-4796-bdaa-457b525327fa" Sep 13 00:13:28.320654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4-shm.mount: Deactivated successfully. Sep 13 00:13:28.321137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b-shm.mount: Deactivated successfully. Sep 13 00:13:28.321355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f-shm.mount: Deactivated successfully. Sep 13 00:13:28.332636 kubelet[3252]: I0913 00:13:28.332597 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:28.334797 containerd[1720]: time="2025-09-13T00:13:28.333585556Z" level=info msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" Sep 13 00:13:28.334797 containerd[1720]: time="2025-09-13T00:13:28.333798760Z" level=info msg="Ensure that sandbox d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53 in task-service has been cleanup successfully" Sep 13 00:13:28.337477 kubelet[3252]: I0913 00:13:28.335857 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:28.341896 containerd[1720]: time="2025-09-13T00:13:28.338341540Z" level=info msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" Sep 13 00:13:28.341896 containerd[1720]: time="2025-09-13T00:13:28.341594097Z" level=info msg="Ensure that sandbox 22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f in task-service has been cleanup successfully" Sep 13 00:13:28.347769 kubelet[3252]: I0913 00:13:28.347734 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:28.349627 containerd[1720]: time="2025-09-13T00:13:28.349574238Z" level=info msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" Sep 13 00:13:28.350232 containerd[1720]: time="2025-09-13T00:13:28.349951744Z" level=info msg="Ensure that sandbox 2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f in task-service has been cleanup successfully" Sep 13 00:13:28.350759 kubelet[3252]: I0913 00:13:28.350682 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:28.351582 
containerd[1720]: time="2025-09-13T00:13:28.351194166Z" level=info msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" Sep 13 00:13:28.351865 containerd[1720]: time="2025-09-13T00:13:28.351837878Z" level=info msg="Ensure that sandbox df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94 in task-service has been cleanup successfully" Sep 13 00:13:28.356988 kubelet[3252]: I0913 00:13:28.356962 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:28.359369 containerd[1720]: time="2025-09-13T00:13:28.359064805Z" level=info msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" Sep 13 00:13:28.359677 kubelet[3252]: I0913 00:13:28.359657 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:28.360483 containerd[1720]: time="2025-09-13T00:13:28.360435029Z" level=info msg="Ensure that sandbox 983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4 in task-service has been cleanup successfully" Sep 13 00:13:28.362417 containerd[1720]: time="2025-09-13T00:13:28.362392263Z" level=info msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" Sep 13 00:13:28.362735 containerd[1720]: time="2025-09-13T00:13:28.362675768Z" level=info msg="Ensure that sandbox dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b in task-service has been cleanup successfully" Sep 13 00:13:28.376628 containerd[1720]: time="2025-09-13T00:13:28.376589513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:13:28.414082 containerd[1720]: time="2025-09-13T00:13:28.413401161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-dnf7s,Uid:5dc69410-9fe3-44c7-9677-e9d7c8975f57,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:13:28.421365 containerd[1720]: time="2025-09-13T00:13:28.421101897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-ts7vf,Uid:c5cc334d-ce94-427b-92b7-c5a6c28738bf,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:13:28.435367 containerd[1720]: time="2025-09-13T00:13:28.435135344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-qtqrp,Uid:cec7c8d9-92bc-4074-ba92-6e1593b9ed77,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:13:28.463314 containerd[1720]: time="2025-09-13T00:13:28.463259539Z" level=error msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" failed" error="failed to destroy network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.464044 kubelet[3252]: E0913 00:13:28.463746 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:28.464044 kubelet[3252]: E0913 00:13:28.463836 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f"} Sep 13 00:13:28.464044 kubelet[3252]: E0913 00:13:28.463915 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a930d21c-bb14-4796-bdaa-457b525327fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.464044 kubelet[3252]: E0913 00:13:28.463961 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a930d21c-bb14-4796-bdaa-457b525327fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pn2sm" podUID="a930d21c-bb14-4796-bdaa-457b525327fa" Sep 13 00:13:28.471119 containerd[1720]: time="2025-09-13T00:13:28.470606668Z" level=error msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" failed" error="failed to destroy network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.471241 kubelet[3252]: E0913 00:13:28.470853 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:28.471241 kubelet[3252]: E0913 00:13:28.470909 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4"} Sep 13 00:13:28.471241 kubelet[3252]: E0913 00:13:28.470953 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"878e2756-bf9e-485f-b59b-c9371e461113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.471241 kubelet[3252]: E0913 00:13:28.471001 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"878e2756-bf9e-485f-b59b-c9371e461113\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" podUID="878e2756-bf9e-485f-b59b-c9371e461113" Sep 13 00:13:28.486787 containerd[1720]: time="2025-09-13T00:13:28.486723952Z" level=error msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" failed" error="failed to destroy network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.487492 kubelet[3252]: E0913 00:13:28.487265 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:28.487492 kubelet[3252]: E0913 00:13:28.487338 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53"} Sep 13 00:13:28.487492 kubelet[3252]: E0913 00:13:28.487383 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.487492 kubelet[3252]: E0913 00:13:28.487429 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3ad624e-065e-4802-ad9c-31cc5acb09b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8dh9r" podUID="e3ad624e-065e-4802-ad9c-31cc5acb09b3" Sep 13 00:13:28.492507 containerd[1720]: time="2025-09-13T00:13:28.491602738Z" level=error msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" failed" error="failed to destroy network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.492900 kubelet[3252]: E0913 00:13:28.492093 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:28.492900 kubelet[3252]: E0913 00:13:28.492153 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f"} Sep 13 00:13:28.492900 kubelet[3252]: E0913 00:13:28.492191 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.492900 kubelet[3252]: E0913 00:13:28.492221 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4cc886d7-krggt" podUID="bb2e6fe7-b915-4b49-8446-cefe39dcaa20" Sep 13 00:13:28.505043 containerd[1720]: time="2025-09-13T00:13:28.502622632Z" level=error msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" failed" error="failed to destroy network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.505043 containerd[1720]: time="2025-09-13T00:13:28.503240743Z" level=error msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" failed" error="failed to destroy network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.505213 kubelet[3252]: E0913 00:13:28.504759 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:28.505213 kubelet[3252]: E0913 00:13:28.504810 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94"} Sep 13 00:13:28.505213 kubelet[3252]: E0913 00:13:28.504857 
3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf31218c-978b-4717-b2c0-67e187ff41da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.505213 kubelet[3252]: E0913 00:13:28.504890 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf31218c-978b-4717-b2c0-67e187ff41da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-n4vhw" podUID="cf31218c-978b-4717-b2c0-67e187ff41da" Sep 13 00:13:28.505567 kubelet[3252]: E0913 00:13:28.504931 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:28.505567 kubelet[3252]: E0913 00:13:28.504954 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b"} Sep 13 00:13:28.505567 kubelet[3252]: E0913 00:13:28.504980 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f346ff52-85ec-4854-bcc7-81887dda8d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:28.505567 kubelet[3252]: E0913 00:13:28.505003 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f346ff52-85ec-4854-bcc7-81887dda8d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l9bxj" podUID="f346ff52-85ec-4854-bcc7-81887dda8d38" Sep 13 00:13:28.624509 containerd[1720]: time="2025-09-13T00:13:28.622749947Z" level=error msg="Failed to destroy network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.626965 containerd[1720]: 
time="2025-09-13T00:13:28.625791000Z" level=error msg="encountered an error cleaning up failed sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.626965 containerd[1720]: time="2025-09-13T00:13:28.625868301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-dnf7s,Uid:5dc69410-9fe3-44c7-9677-e9d7c8975f57,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.627165 kubelet[3252]: E0913 00:13:28.626547 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.627165 kubelet[3252]: E0913 00:13:28.626617 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" Sep 13 00:13:28.627165 kubelet[3252]: E0913 00:13:28.626644 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" Sep 13 00:13:28.627316 kubelet[3252]: E0913 00:13:28.626699 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b854d5954-dnf7s_calico-apiserver(5dc69410-9fe3-44c7-9677-e9d7c8975f57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b854d5954-dnf7s_calico-apiserver(5dc69410-9fe3-44c7-9677-e9d7c8975f57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" podUID="5dc69410-9fe3-44c7-9677-e9d7c8975f57" Sep 13 00:13:28.653249 containerd[1720]: time="2025-09-13T00:13:28.652881277Z" level=error msg="Failed to destroy network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.653607 containerd[1720]: time="2025-09-13T00:13:28.653452187Z" level=error msg="Failed to destroy network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654082 containerd[1720]: time="2025-09-13T00:13:28.653599290Z" level=error msg="encountered an error cleaning up failed sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654082 containerd[1720]: time="2025-09-13T00:13:28.653885495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-qtqrp,Uid:cec7c8d9-92bc-4074-ba92-6e1593b9ed77,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654082 containerd[1720]: time="2025-09-13T00:13:28.653974696Z" level=error msg="encountered an error cleaning up failed sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654082 containerd[1720]: time="2025-09-13T00:13:28.654018697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-ts7vf,Uid:c5cc334d-ce94-427b-92b7-c5a6c28738bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654353 kubelet[3252]: E0913 00:13:28.654235 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654353 kubelet[3252]: E0913 00:13:28.654156 3252 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:28.654353 kubelet[3252]: E0913 00:13:28.654323 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" Sep 13 00:13:28.654544 kubelet[3252]: E0913 00:13:28.654372 3252 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" Sep 13 00:13:28.654544 kubelet[3252]: E0913 00:13:28.654398 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" Sep 13 00:13:28.654544 kubelet[3252]: E0913 00:13:28.654450 3252 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" Sep 13 00:13:28.654677 kubelet[3252]: E0913 00:13:28.654598 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b854d5954-qtqrp_calico-apiserver(cec7c8d9-92bc-4074-ba92-6e1593b9ed77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b854d5954-qtqrp_calico-apiserver(cec7c8d9-92bc-4074-ba92-6e1593b9ed77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" podUID="cec7c8d9-92bc-4074-ba92-6e1593b9ed77" Sep 13 00:13:28.654677 kubelet[3252]: E0913 00:13:28.654526 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fc4d7f994-ts7vf_calico-apiserver(c5cc334d-ce94-427b-92b7-c5a6c28738bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc4d7f994-ts7vf_calico-apiserver(c5cc334d-ce94-427b-92b7-c5a6c28738bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" podUID="c5cc334d-ce94-427b-92b7-c5a6c28738bf" Sep 13 00:13:29.313092 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993-shm.mount: Deactivated successfully. Sep 13 00:13:29.313213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466-shm.mount: Deactivated successfully. Sep 13 00:13:29.373426 kubelet[3252]: I0913 00:13:29.373390 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:29.375636 containerd[1720]: time="2025-09-13T00:13:29.374220975Z" level=info msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" Sep 13 00:13:29.375636 containerd[1720]: time="2025-09-13T00:13:29.374421879Z" level=info msg="Ensure that sandbox d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c in task-service has been cleanup successfully" Sep 13 00:13:29.376619 kubelet[3252]: I0913 00:13:29.376593 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:29.378061 containerd[1720]: time="2025-09-13T00:13:29.378033742Z" level=info msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" Sep 13 00:13:29.379119 containerd[1720]: time="2025-09-13T00:13:29.379091761Z" level=info msg="Ensure that sandbox 0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466 in task-service has been cleanup successfully" Sep 13 00:13:29.380934 kubelet[3252]: I0913 00:13:29.380896 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:29.382741 containerd[1720]: time="2025-09-13T00:13:29.382710525Z" level=info msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" Sep 13 00:13:29.382953 containerd[1720]: time="2025-09-13T00:13:29.382928428Z" level=info msg="Ensure that sandbox 3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993 in task-service has been cleanup successfully" Sep 13 00:13:29.422264 containerd[1720]: time="2025-09-13T00:13:29.421988516Z" level=error msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" failed" error="failed to destroy network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:29.423046 kubelet[3252]: E0913 00:13:29.422665 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:29.423046 kubelet[3252]: E0913 00:13:29.422728 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c"} Sep 13 00:13:29.423046 kubelet[3252]: E0913 00:13:29.422969 3252 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:29.423046 kubelet[3252]: E0913 00:13:29.423008 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" podUID="cec7c8d9-92bc-4074-ba92-6e1593b9ed77" Sep 13 00:13:29.439804 containerd[1720]: time="2025-09-13T00:13:29.439585626Z" level=error msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" failed" error="failed to destroy network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:29.440305 kubelet[3252]: E0913 00:13:29.440125 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:29.440305 kubelet[3252]: E0913 00:13:29.440185 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466"} Sep 13 00:13:29.440305 kubelet[3252]: E0913 00:13:29.440228 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:29.440305 kubelet[3252]: E0913 00:13:29.440259 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" 
podUID="5dc69410-9fe3-44c7-9677-e9d7c8975f57" Sep 13 00:13:29.442369 containerd[1720]: time="2025-09-13T00:13:29.442325774Z" level=error msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" failed" error="failed to destroy network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:13:29.442594 kubelet[3252]: E0913 00:13:29.442553 3252 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:29.442733 kubelet[3252]: E0913 00:13:29.442606 3252 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993"} Sep 13 00:13:29.442733 kubelet[3252]: E0913 00:13:29.442645 3252 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5cc334d-ce94-427b-92b7-c5a6c28738bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:13:29.442733 kubelet[3252]: E0913 00:13:29.442674 3252 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5cc334d-ce94-427b-92b7-c5a6c28738bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" podUID="c5cc334d-ce94-427b-92b7-c5a6c28738bf" Sep 13 00:13:34.601074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380055035.mount: Deactivated successfully. 
Sep 13 00:13:34.641134 containerd[1720]: time="2025-09-13T00:13:34.641070347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:34.643596 containerd[1720]: time="2025-09-13T00:13:34.643433587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:13:34.646501 containerd[1720]: time="2025-09-13T00:13:34.646381136Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:34.650623 containerd[1720]: time="2025-09-13T00:13:34.650503906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:34.651299 containerd[1720]: time="2025-09-13T00:13:34.651130716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.274492902s" Sep 13 00:13:34.651299 containerd[1720]: time="2025-09-13T00:13:34.651176817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:13:34.668759 containerd[1720]: time="2025-09-13T00:13:34.668548009Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:13:34.707793 containerd[1720]: time="2025-09-13T00:13:34.707747268Z" level=info msg="CreateContainer within sandbox \"17f6b9b19baf7d8811df3320a649167ab42859984c620fcaece8af46830efbc0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af\"" Sep 13 00:13:34.708613 containerd[1720]: time="2025-09-13T00:13:34.708562482Z" level=info msg="StartContainer for \"ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af\"" Sep 13 00:13:34.738650 systemd[1]: Started cri-containerd-ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af.scope - libcontainer container ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af. Sep 13 00:13:34.772795 containerd[1720]: time="2025-09-13T00:13:34.772750761Z" level=info msg="StartContainer for \"ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af\" returns successfully" Sep 13 00:13:35.240275 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:13:35.240434 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
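The pull that just finished moved 157,078,201 bytes in 6.274492902 s, roughly 25 MB/s. A sketch of issuing the same pull through the containerd Go client, assuming the default socket path and the "k8s.io" namespace that the kubelet uses:

```go
// Sketch against the containerd v1 Go client; mirrors the PullImage call
// recorded above. Assumes access to /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s, %d bytes\n", img.Name(), size)
}
```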
Sep 13 00:13:35.390794 containerd[1720]: time="2025-09-13T00:13:35.390733252Z" level=info msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" Sep 13 00:13:35.484891 kubelet[3252]: I0913 00:13:35.484809 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hwdj7" podStartSLOduration=1.692640983 podStartE2EDuration="20.484787534s" podCreationTimestamp="2025-09-13 00:13:15 +0000 UTC" firstStartedPulling="2025-09-13 00:13:15.859956081 +0000 UTC m=+20.761987599" lastFinishedPulling="2025-09-13 00:13:34.652102732 +0000 UTC m=+39.554134150" observedRunningTime="2025-09-13 00:13:35.482663998 +0000 UTC m=+40.384695416" watchObservedRunningTime="2025-09-13 00:13:35.484787534 +0000 UTC m=+40.386818952" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.547 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.547 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" iface="eth0" netns="/var/run/netns/cni-1951ce5e-cafd-63b7-0b63-8dbb337ac0dc" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.548 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" iface="eth0" netns="/var/run/netns/cni-1951ce5e-cafd-63b7-0b63-8dbb337ac0dc" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.548 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" iface="eth0" netns="/var/run/netns/cni-1951ce5e-cafd-63b7-0b63-8dbb337ac0dc" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.548 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.548 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.598 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.599 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.600 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.619 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.619 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.623 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:35.630670 containerd[1720]: 2025-09-13 00:13:35.628 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:35.631720 containerd[1720]: time="2025-09-13T00:13:35.631674803Z" level=info msg="TearDown network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" successfully" Sep 13 00:13:35.631720 containerd[1720]: time="2025-09-13T00:13:35.631719204Z" level=info msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" returns successfully" Sep 13 00:13:35.640802 systemd[1]: run-netns-cni\x2d1951ce5e\x2dcafd\x2d63b7\x2d0b63\x2d8dbb337ac0dc.mount: Deactivated successfully. Sep 13 00:13:35.767889 kubelet[3252]: I0913 00:13:35.767779 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-backend-key-pair\") pod \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " Sep 13 00:13:35.769484 kubelet[3252]: I0913 00:13:35.769075 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-ca-bundle\") pod \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " Sep 13 00:13:35.769484 kubelet[3252]: I0913 00:13:35.769127 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98gwm\" (UniqueName: \"kubernetes.io/projected/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-kube-api-access-98gwm\") pod \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\" (UID: \"bb2e6fe7-b915-4b49-8446-cefe39dcaa20\") " Sep 13 00:13:35.769484 kubelet[3252]: I0913 00:13:35.769419 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bb2e6fe7-b915-4b49-8446-cefe39dcaa20" (UID: "bb2e6fe7-b915-4b49-8446-cefe39dcaa20"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:13:35.773432 kubelet[3252]: I0913 00:13:35.773390 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bb2e6fe7-b915-4b49-8446-cefe39dcaa20" (UID: "bb2e6fe7-b915-4b49-8446-cefe39dcaa20"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:13:35.777504 kubelet[3252]: I0913 00:13:35.777443 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-kube-api-access-98gwm" (OuterVolumeSpecName: "kube-api-access-98gwm") pod "bb2e6fe7-b915-4b49-8446-cefe39dcaa20" (UID: "bb2e6fe7-b915-4b49-8446-cefe39dcaa20"). InnerVolumeSpecName "kube-api-access-98gwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:13:35.779006 systemd[1]: var-lib-kubelet-pods-bb2e6fe7\x2db915\x2d4b49\x2d8446\x2dcefe39dcaa20-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:13:35.784095 systemd[1]: var-lib-kubelet-pods-bb2e6fe7\x2db915\x2d4b49\x2d8446\x2dcefe39dcaa20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98gwm.mount: Deactivated successfully. Sep 13 00:13:35.869894 kubelet[3252]: I0913 00:13:35.869849 3252 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-ca-bundle\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:13:35.869894 kubelet[3252]: I0913 00:13:35.869888 3252 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98gwm\" (UniqueName: \"kubernetes.io/projected/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-kube-api-access-98gwm\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:13:35.869894 kubelet[3252]: I0913 00:13:35.869901 3252 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb2e6fe7-b915-4b49-8446-cefe39dcaa20-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:13:36.423260 systemd[1]: Removed slice kubepods-besteffort-podbb2e6fe7_b915_4b49_8446_cefe39dcaa20.slice - libcontainer container kubepods-besteffort-podbb2e6fe7_b915_4b49_8446_cefe39dcaa20.slice. Sep 13 00:13:36.526956 systemd[1]: Created slice kubepods-besteffort-pod7504302e_c535_49f7_87c7_6d79327bb046.slice - libcontainer container kubepods-besteffort-pod7504302e_c535_49f7_87c7_6d79327bb046.slice. 
Sep 13 00:13:36.675382 kubelet[3252]: I0913 00:13:36.675228 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwd8\" (UniqueName: \"kubernetes.io/projected/7504302e-c535-49f7-87c7-6d79327bb046-kube-api-access-nhwd8\") pod \"whisker-659d785666-5b4fx\" (UID: \"7504302e-c535-49f7-87c7-6d79327bb046\") " pod="calico-system/whisker-659d785666-5b4fx" Sep 13 00:13:36.675382 kubelet[3252]: I0913 00:13:36.675290 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7504302e-c535-49f7-87c7-6d79327bb046-whisker-backend-key-pair\") pod \"whisker-659d785666-5b4fx\" (UID: \"7504302e-c535-49f7-87c7-6d79327bb046\") " pod="calico-system/whisker-659d785666-5b4fx" Sep 13 00:13:36.675382 kubelet[3252]: I0913 00:13:36.675320 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7504302e-c535-49f7-87c7-6d79327bb046-whisker-ca-bundle\") pod \"whisker-659d785666-5b4fx\" (UID: \"7504302e-c535-49f7-87c7-6d79327bb046\") " pod="calico-system/whisker-659d785666-5b4fx" Sep 13 00:13:36.830978 containerd[1720]: time="2025-09-13T00:13:36.830934368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d785666-5b4fx,Uid:7504302e-c535-49f7-87c7-6d79327bb046,Namespace:calico-system,Attempt:0,}" Sep 13 00:13:37.141384 systemd-networkd[1578]: calic18831240c4: Link UP Sep 13 00:13:37.143369 systemd-networkd[1578]: calic18831240c4: Gained carrier Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:36.977 [INFO][4636] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:36.993 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0 whisker-659d785666- calico-system 7504302e-c535-49f7-87c7-6d79327bb046 895 0 2025-09-13 00:13:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:659d785666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 whisker-659d785666-5b4fx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic18831240c4 [] [] }} ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:36.993 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.054 [INFO][4677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" HandleID="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.054 [INFO][4677] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" HandleID="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5da0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"whisker-659d785666-5b4fx", "timestamp":"2025-09-13 00:13:37.054172222 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.055 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.055 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.055 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.070 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.077 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.081 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.083 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.086 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.086 [INFO][4677] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.087 [INFO][4677] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35 Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.092 [INFO][4677] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.104 [INFO][4677] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.193/26] block=192.168.107.192/26 handle="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.104 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.193/26] handle="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 
00:13:37.104 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:37.169718 containerd[1720]: 2025-09-13 00:13:37.104 [INFO][4677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.193/26] IPv6=[] ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" HandleID="k8s-pod-network.be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.108 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0", GenerateName:"whisker-659d785666-", Namespace:"calico-system", SelfLink:"", UID:"7504302e-c535-49f7-87c7-6d79327bb046", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659d785666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"whisker-659d785666-5b4fx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic18831240c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.109 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.193/32] ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.109 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic18831240c4 ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.143 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.146 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0", GenerateName:"whisker-659d785666-", Namespace:"calico-system", SelfLink:"", UID:"7504302e-c535-49f7-87c7-6d79327bb046", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"659d785666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35", Pod:"whisker-659d785666-5b4fx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic18831240c4", MAC:"3a:7f:71:46:75:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:37.170785 containerd[1720]: 2025-09-13 00:13:37.166 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35" Namespace="calico-system" Pod="whisker-659d785666-5b4fx" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--659d785666--5b4fx-eth0" Sep 13 00:13:37.201302 kubelet[3252]: I0913 00:13:37.201242 3252 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2e6fe7-b915-4b49-8446-cefe39dcaa20" path="/var/lib/kubelet/pods/bb2e6fe7-b915-4b49-8446-cefe39dcaa20/volumes" Sep 13 00:13:37.217994 containerd[1720]: time="2025-09-13T00:13:37.217644071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:37.217994 containerd[1720]: time="2025-09-13T00:13:37.217725472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:37.217994 containerd[1720]: time="2025-09-13T00:13:37.217765473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:37.217994 containerd[1720]: time="2025-09-13T00:13:37.217881975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:37.265429 systemd[1]: Started cri-containerd-be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35.scope - libcontainer container be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35. 
Sep 13 00:13:37.350325 containerd[1720]: time="2025-09-13T00:13:37.350258001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659d785666-5b4fx,Uid:7504302e-c535-49f7-87c7-6d79327bb046,Namespace:calico-system,Attempt:0,} returns sandbox id \"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35\"" Sep 13 00:13:37.356076 containerd[1720]: time="2025-09-13T00:13:37.356031998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:13:38.479331 systemd-networkd[1578]: calic18831240c4: Gained IPv6LL Sep 13 00:13:38.619518 containerd[1720]: time="2025-09-13T00:13:38.618908254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:38.627524 containerd[1720]: time="2025-09-13T00:13:38.627283892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:13:38.632934 containerd[1720]: time="2025-09-13T00:13:38.631657065Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:38.639819 containerd[1720]: time="2025-09-13T00:13:38.639742899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:38.640722 containerd[1720]: time="2025-09-13T00:13:38.640580413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.283066191s" Sep 13 00:13:38.640722 containerd[1720]: time="2025-09-13T00:13:38.640623213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:13:38.643386 containerd[1720]: time="2025-09-13T00:13:38.643343859Z" level=info msg="CreateContainer within sandbox \"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:13:38.689130 containerd[1720]: time="2025-09-13T00:13:38.689074416Z" level=info msg="CreateContainer within sandbox \"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c36e38b20dce0e2153491accf4fd576529443ddd23c0c0af8c2fbc0f9345c8b3\"" Sep 13 00:13:38.692925 containerd[1720]: time="2025-09-13T00:13:38.691796661Z" level=info msg="StartContainer for \"c36e38b20dce0e2153491accf4fd576529443ddd23c0c0af8c2fbc0f9345c8b3\"" Sep 13 00:13:38.731663 systemd[1]: Started cri-containerd-c36e38b20dce0e2153491accf4fd576529443ddd23c0c0af8c2fbc0f9345c8b3.scope - libcontainer container c36e38b20dce0e2153491accf4fd576529443ddd23c0c0af8c2fbc0f9345c8b3. 
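The PullImage, CreateContainer, StartContainer sequence recorded for calico-node above and for whisker here maps onto the containerd Go client roughly as below. This is a sketch, not the production path (which goes through containerd's CRI plugin), and the container and snapshot IDs are invented for illustration; the StartContainer result follows just below in the log.

```go
// Sketch of the create/start flow using the containerd v1 Go client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The image was already unpacked by the earlier pull.
	img, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.3")
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata plus a writable snapshot and an OCI spec.
	ctr, err := client.NewContainer(ctx, "whisker-demo",
		containerd.WithNewSnapshot("whisker-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}
	defer ctr.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the runc task, then start it; with the systemd
	// cgroup driver this is roughly where the cri-containerd-<id>.scope
	// units in the log come from.
	task, err := ctr.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started:", ctr.ID())
}
```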
Sep 13 00:13:38.782234 containerd[1720]: time="2025-09-13T00:13:38.782173358Z" level=info msg="StartContainer for \"c36e38b20dce0e2153491accf4fd576529443ddd23c0c0af8c2fbc0f9345c8b3\" returns successfully" Sep 13 00:13:38.784625 containerd[1720]: time="2025-09-13T00:13:38.784257293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:13:39.195157 containerd[1720]: time="2025-09-13T00:13:39.194786893Z" level=info msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" Sep 13 00:13:39.195157 containerd[1720]: time="2025-09-13T00:13:39.195141398Z" level=info msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.265 [INFO][4823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.266 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" iface="eth0" netns="/var/run/netns/cni-f0d2a6ea-5834-8668-d0b3-b01e4d5eb65b" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.267 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" iface="eth0" netns="/var/run/netns/cni-f0d2a6ea-5834-8668-d0b3-b01e4d5eb65b" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.267 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" iface="eth0" netns="/var/run/netns/cni-f0d2a6ea-5834-8668-d0b3-b01e4d5eb65b" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.267 [INFO][4823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.267 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.312 [INFO][4837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.314 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.314 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.321 [WARNING][4837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.322 [INFO][4837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.324 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:39.331715 containerd[1720]: 2025-09-13 00:13:39.327 [INFO][4823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:39.335108 containerd[1720]: time="2025-09-13T00:13:39.331833163Z" level=info msg="TearDown network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" successfully" Sep 13 00:13:39.335108 containerd[1720]: time="2025-09-13T00:13:39.331867063Z" level=info msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" returns successfully" Sep 13 00:13:39.338270 containerd[1720]: time="2025-09-13T00:13:39.335937031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n4vhw,Uid:cf31218c-978b-4717-b2c0-67e187ff41da,Namespace:kube-system,Attempt:1,}" Sep 13 00:13:39.336852 systemd[1]: run-netns-cni\x2df0d2a6ea\x2d5834\x2d8668\x2dd0b3\x2db01e4d5eb65b.mount: Deactivated successfully. Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.273 [INFO][4824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.273 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" iface="eth0" netns="/var/run/netns/cni-2a317353-f68d-1285-2861-07c9ad2fb4c2" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.273 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" iface="eth0" netns="/var/run/netns/cni-2a317353-f68d-1285-2861-07c9ad2fb4c2" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.273 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" iface="eth0" netns="/var/run/netns/cni-2a317353-f68d-1285-2861-07c9ad2fb4c2" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.274 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.274 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.317 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.317 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.324 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.341 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.341 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.344 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:39.349564 containerd[1720]: 2025-09-13 00:13:39.347 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:39.355216 containerd[1720]: time="2025-09-13T00:13:39.349721559Z" level=info msg="TearDown network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" successfully" Sep 13 00:13:39.355216 containerd[1720]: time="2025-09-13T00:13:39.349751359Z" level=info msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" returns successfully" Sep 13 00:13:39.355216 containerd[1720]: time="2025-09-13T00:13:39.352195600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c967b48b-ks4c2,Uid:878e2756-bf9e-485f-b59b-c9371e461113,Namespace:calico-system,Attempt:1,}" Sep 13 00:13:39.354210 systemd[1]: run-netns-cni\x2d2a317353\x2df68d\x2d1285\x2d2861\x2d07c9ad2fb4c2.mount: Deactivated successfully. 
Sep 13 00:13:39.640535 systemd-networkd[1578]: cali2fb3315507c: Link UP Sep 13 00:13:39.642661 systemd-networkd[1578]: cali2fb3315507c: Gained carrier Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.482 [INFO][4857] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.499 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0 coredns-7c65d6cfc9- kube-system cf31218c-978b-4717-b2c0-67e187ff41da 915 0 2025-09-13 00:13:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 coredns-7c65d6cfc9-n4vhw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fb3315507c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.499 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.567 [INFO][4882] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" HandleID="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.567 [INFO][4882] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" HandleID="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"coredns-7c65d6cfc9-n4vhw", "timestamp":"2025-09-13 00:13:39.565578634 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.567 [INFO][4882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.567 [INFO][4882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.567 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.582 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.590 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.597 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.600 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.603 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.604 [INFO][4882] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.607 [INFO][4882] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95 Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.614 [INFO][4882] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4882] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.194/26] block=192.168.107.192/26 handle="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.194/26] handle="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
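The arithmetic behind this IPAM trace: the node holds an affinity for the block 192.168.107.192/26, which spans 64 addresses, and workloads draw sequential addresses from it (.193 for the whisker pod earlier, .194 for coredns here, .195 for calico-kube-controllers just below). A stdlib sketch of that block math:

```go
// Stdlib sketch of the /26 block arithmetic seen in the IPAM log entries.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.107.192/26")
	fmt.Printf("%s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64

	// Sequential assignment as observed: .193, .194, .195, ...
	addr := block.Addr().Next() // first address after the block base
	for i := 0; i < 3; i++ {
		fmt.Println(addr, block.Contains(addr)) // 192.168.107.193 true, ...
		addr = addr.Next()
	}
}
```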
Sep 13 00:13:39.678643 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4882] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.194/26] IPv6=[] ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" HandleID="k8s-pod-network.79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.635 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf31218c-978b-4717-b2c0-67e187ff41da", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"coredns-7c65d6cfc9-n4vhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fb3315507c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.635 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.194/32] ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.635 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fb3315507c ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.643 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system"
Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.644 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf31218c-978b-4717-b2c0-67e187ff41da", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95", Pod:"coredns-7c65d6cfc9-n4vhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fb3315507c", MAC:"a6:a3:9d:34:0b:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:39.681198 containerd[1720]: 2025-09-13 00:13:39.671 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95" Namespace="kube-system" Pod="coredns-7c65d6cfc9-n4vhw" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:39.734692 containerd[1720]: time="2025-09-13T00:13:39.734587834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:39.734888 containerd[1720]: time="2025-09-13T00:13:39.734700136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:39.734888 containerd[1720]: time="2025-09-13T00:13:39.734742236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:39.735086 containerd[1720]: time="2025-09-13T00:13:39.734954240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:39.756439 systemd-networkd[1578]: calice35e9be906: Link UP Sep 13 00:13:39.757506 systemd-networkd[1578]: calice35e9be906: Gained carrier Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.495 [INFO][4866] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.511 [INFO][4866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0 calico-kube-controllers-6c967b48b- calico-system 878e2756-bf9e-485f-b59b-c9371e461113 916 0 2025-09-13 00:13:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c967b48b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 calico-kube-controllers-6c967b48b-ks4c2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calice35e9be906 [] [] }} ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.511 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.576 [INFO][4889] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" HandleID="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.576 [INFO][4889] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" HandleID="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"calico-kube-controllers-6c967b48b-ks4c2", "timestamp":"2025-09-13 00:13:39.576061008 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.576 [INFO][4889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.632 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.684 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.689 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.701 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.706 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.710 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.710 [INFO][4889] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.713 [INFO][4889] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.726 [INFO][4889] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.746 [INFO][4889] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.195/26] block=192.168.107.192/26 handle="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.746 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.195/26] handle="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.746 [INFO][4889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
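[Annotation] The ipam.go records above are the whole of Calico's block-affinity allocation for this pod: confirm this host's affinity to block 192.168.107.192/26, load the block, take the next free address (192.168.107.195), create a handle named after the container ID, and write the block back, all under the host-wide lock. Below is a minimal self-contained sketch of the "first free ordinal in an affine block" step; the block type and handle map are illustrative stand-ins, not libcalico-go's actual types.

package main

import (
	"fmt"
	"net/netip"
)

// block models one host-affine /26 the way the ipam.go records use it:
// a base CIDR plus a per-ordinal handle recording who owns each address.
type block struct {
	cidr    netip.Prefix
	handles map[int]string // ordinal -> handle ID; absent = free
}

// assign claims the first free ordinal for handleID and returns its address.
func (b *block) assign(handleID string) (netip.Addr, bool) {
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for ord := 0; ord < size; ord++ {
		if _, taken := b.handles[ord]; taken {
			continue
		}
		b.handles[ord] = handleID
		addr := b.cidr.Addr()
		for i := 0; i < ord; i++ {
			addr = addr.Next()
		}
		return addr, true
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.107.192/26"),
		// Assumption: .192 and .193 were allocated before this excerpt;
		// .194 went to coredns-7c65d6cfc9-n4vhw a few records up.
		handles: map[int]string{0: "earlier", 1: "earlier", 2: "k8s-pod-network.79ce3736..."},
	}
	ip, _ := b.assign("k8s-pod-network.c5109cafe39d7b68...")
	fmt.Println(ip) // 192.168.107.195, the address the log claims for calico-kube-controllers
}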
Sep 13 00:13:39.785470 containerd[1720]: 2025-09-13 00:13:39.746 [INFO][4889] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.195/26] IPv6=[] ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" HandleID="k8s-pod-network.c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.750 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0", GenerateName:"calico-kube-controllers-6c967b48b-", Namespace:"calico-system", SelfLink:"", UID:"878e2756-bf9e-485f-b59b-c9371e461113", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c967b48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"calico-kube-controllers-6c967b48b-ks4c2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice35e9be906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.752 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.195/32] ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.752 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice35e9be906 ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.756 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" 
Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.759 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0", GenerateName:"calico-kube-controllers-6c967b48b-", Namespace:"calico-system", SelfLink:"", UID:"878e2756-bf9e-485f-b59b-c9371e461113", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c967b48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c", Pod:"calico-kube-controllers-6c967b48b-ks4c2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice35e9be906", MAC:"ba:62:f9:b3:72:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:39.786137 containerd[1720]: 2025-09-13 00:13:39.777 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c" Namespace="calico-system" Pod="calico-kube-controllers-6c967b48b-ks4c2" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:39.789180 systemd[1]: Started cri-containerd-79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95.scope - libcontainer container 79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95. Sep 13 00:13:39.829157 containerd[1720]: time="2025-09-13T00:13:39.828693193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:39.829157 containerd[1720]: time="2025-09-13T00:13:39.828796794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:39.829157 containerd[1720]: time="2025-09-13T00:13:39.828831295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:39.829157 containerd[1720]: time="2025-09-13T00:13:39.828944097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:39.861779 containerd[1720]: time="2025-09-13T00:13:39.861286432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n4vhw,Uid:cf31218c-978b-4717-b2c0-67e187ff41da,Namespace:kube-system,Attempt:1,} returns sandbox id \"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95\"" Sep 13 00:13:39.869774 systemd[1]: Started cri-containerd-c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c.scope - libcontainer container c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c. Sep 13 00:13:39.871115 containerd[1720]: time="2025-09-13T00:13:39.870042577Z" level=info msg="CreateContainer within sandbox \"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:13:39.919549 containerd[1720]: time="2025-09-13T00:13:39.917389662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c967b48b-ks4c2,Uid:878e2756-bf9e-485f-b59b-c9371e461113,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c\"" Sep 13 00:13:39.924815 containerd[1720]: time="2025-09-13T00:13:39.924768284Z" level=info msg="CreateContainer within sandbox \"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d7fc5400daec09adae0b77a076187dca0c2d495fb287b66fc399ff97f289179\"" Sep 13 00:13:39.925386 containerd[1720]: time="2025-09-13T00:13:39.925323393Z" level=info msg="StartContainer for \"4d7fc5400daec09adae0b77a076187dca0c2d495fb287b66fc399ff97f289179\"" Sep 13 00:13:39.956981 systemd[1]: Started cri-containerd-4d7fc5400daec09adae0b77a076187dca0c2d495fb287b66fc399ff97f289179.scope - libcontainer container 4d7fc5400daec09adae0b77a076187dca0c2d495fb287b66fc399ff97f289179. Sep 13 00:13:40.007688 containerd[1720]: time="2025-09-13T00:13:40.007630956Z" level=info msg="StartContainer for \"4d7fc5400daec09adae0b77a076187dca0c2d495fb287b66fc399ff97f289179\" returns successfully" Sep 13 00:13:40.194079 containerd[1720]: time="2025-09-13T00:13:40.193848041Z" level=info msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.250 [INFO][5047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.250 [INFO][5047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" iface="eth0" netns="/var/run/netns/cni-cba44862-46e2-8989-2392-c167e60a701f" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.250 [INFO][5047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" iface="eth0" netns="/var/run/netns/cni-cba44862-46e2-8989-2392-c167e60a701f" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.251 [INFO][5047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" iface="eth0" netns="/var/run/netns/cni-cba44862-46e2-8989-2392-c167e60a701f" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.251 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.251 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.277 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.277 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.277 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.286 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.286 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.288 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:40.290376 containerd[1720]: 2025-09-13 00:13:40.289 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:40.290992 containerd[1720]: time="2025-09-13T00:13:40.290551743Z" level=info msg="TearDown network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" successfully" Sep 13 00:13:40.290992 containerd[1720]: time="2025-09-13T00:13:40.290585543Z" level=info msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" returns successfully" Sep 13 00:13:40.291357 containerd[1720]: time="2025-09-13T00:13:40.291319355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8dh9r,Uid:e3ad624e-065e-4802-ad9c-31cc5acb09b3,Namespace:calico-system,Attempt:1,}" Sep 13 00:13:40.338260 systemd[1]: run-netns-cni\x2dcba44862\x2d46e2\x2d8989\x2d2392\x2dc167e60a701f.mount: Deactivated successfully. 
Sep 13 00:13:40.510629 kubelet[3252]: I0913 00:13:40.510365 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n4vhw" podStartSLOduration=40.510337783 podStartE2EDuration="40.510337783s" podCreationTimestamp="2025-09-13 00:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:13:40.482180417 +0000 UTC m=+45.384211935" watchObservedRunningTime="2025-09-13 00:13:40.510337783 +0000 UTC m=+45.412369201" Sep 13 00:13:40.542620 systemd-networkd[1578]: cali3b4196a06d8: Link UP Sep 13 00:13:40.542927 systemd-networkd[1578]: cali3b4196a06d8: Gained carrier Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.366 [INFO][5062] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.375 [INFO][5062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0 goldmane-7988f88666- calico-system e3ad624e-065e-4802-ad9c-31cc5acb09b3 929 0 2025-09-13 00:13:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 goldmane-7988f88666-8dh9r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3b4196a06d8 [] [] }} ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.375 [INFO][5062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.423 [INFO][5073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" HandleID="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.423 [INFO][5073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" HandleID="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"goldmane-7988f88666-8dh9r", "timestamp":"2025-09-13 00:13:40.42321114 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.423 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.423 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.423 [INFO][5073] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.431 [INFO][5073] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.444 [INFO][5073] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.460 [INFO][5073] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.465 [INFO][5073] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.471 [INFO][5073] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.471 [INFO][5073] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.479 [INFO][5073] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.495 [INFO][5073] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.520 [INFO][5073] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.196/26] block=192.168.107.192/26 handle="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.521 [INFO][5073] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.196/26] handle="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.521 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:13:40.575398 containerd[1720]: 2025-09-13 00:13:40.521 [INFO][5073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.196/26] IPv6=[] ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" HandleID="k8s-pod-network.49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.528 [INFO][5062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3ad624e-065e-4802-ad9c-31cc5acb09b3", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"goldmane-7988f88666-8dh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b4196a06d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.528 [INFO][5062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.196/32] ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.528 [INFO][5062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b4196a06d8 ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.541 [INFO][5062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.544 [INFO][5062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" 
Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3ad624e-065e-4802-ad9c-31cc5acb09b3", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e", Pod:"goldmane-7988f88666-8dh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b4196a06d8", MAC:"ee:e2:a0:a3:60:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:40.578452 containerd[1720]: 2025-09-13 00:13:40.572 [INFO][5062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e" Namespace="calico-system" Pod="goldmane-7988f88666-8dh9r" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:40.795121 containerd[1720]: time="2025-09-13T00:13:40.794744594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:40.795121 containerd[1720]: time="2025-09-13T00:13:40.794833196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:40.795121 containerd[1720]: time="2025-09-13T00:13:40.794890897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:40.795121 containerd[1720]: time="2025-09-13T00:13:40.795011999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:40.837217 systemd[1]: Started cri-containerd-49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e.scope - libcontainer container 49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e. 
Sep 13 00:13:41.027074 containerd[1720]: time="2025-09-13T00:13:41.027025542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8dh9r,Uid:e3ad624e-065e-4802-ad9c-31cc5acb09b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e\"" Sep 13 00:13:41.195840 containerd[1720]: time="2025-09-13T00:13:41.195208127Z" level=info msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.268 [INFO][5169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.268 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" iface="eth0" netns="/var/run/netns/cni-d1a8559b-e0fd-9c1b-81e5-6bdecb8e3736" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.269 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" iface="eth0" netns="/var/run/netns/cni-d1a8559b-e0fd-9c1b-81e5-6bdecb8e3736" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.269 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" iface="eth0" netns="/var/run/netns/cni-d1a8559b-e0fd-9c1b-81e5-6bdecb8e3736" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.269 [INFO][5169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.269 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.304 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.304 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.305 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.314 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.314 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.317 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:41.322128 containerd[1720]: 2025-09-13 00:13:41.318 [INFO][5169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:41.323104 containerd[1720]: time="2025-09-13T00:13:41.322294732Z" level=info msg="TearDown network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" successfully" Sep 13 00:13:41.323104 containerd[1720]: time="2025-09-13T00:13:41.322327933Z" level=info msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" returns successfully" Sep 13 00:13:41.323617 containerd[1720]: time="2025-09-13T00:13:41.323586654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-qtqrp,Uid:cec7c8d9-92bc-4074-ba92-6e1593b9ed77,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:13:41.336727 systemd[1]: run-netns-cni\x2dd1a8559b\x2de0fd\x2d9c1b\x2d81e5\x2d6bdecb8e3736.mount: Deactivated successfully. 
Sep 13 00:13:41.358614 systemd-networkd[1578]: cali2fb3315507c: Gained IPv6LL Sep 13 00:13:41.552141 systemd-networkd[1578]: cali7302a5352fd: Link UP Sep 13 00:13:41.553434 systemd-networkd[1578]: cali7302a5352fd: Gained carrier Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.422 [INFO][5183] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.438 [INFO][5183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0 calico-apiserver-5b854d5954- calico-apiserver cec7c8d9-92bc-4074-ba92-6e1593b9ed77 945 0 2025-09-13 00:13:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b854d5954 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 calico-apiserver-5b854d5954-qtqrp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7302a5352fd [] [] }} ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.438 [INFO][5183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.482 [INFO][5196] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.482 [INFO][5196] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a83c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-78cb87e672", "pod":"calico-apiserver-5b854d5954-qtqrp", "timestamp":"2025-09-13 00:13:41.482772191 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.483 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.483 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.483 [INFO][5196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.492 [INFO][5196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.498 [INFO][5196] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.504 [INFO][5196] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.506 [INFO][5196] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.510 [INFO][5196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.510 [INFO][5196] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.512 [INFO][5196] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75 Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.524 [INFO][5196] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.540 [INFO][5196] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.197/26] block=192.168.107.192/26 handle="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.541 [INFO][5196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.197/26] handle="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.541 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
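[Annotation] The same IPAM walk now repeats verbatim for calico-apiserver-5b854d5954-qtqrp, and every repetition is bracketed by "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": concurrent CNI ADDs on one node are serialized so two pods can never claim the same ordinal from the block. A toy illustration of that property only; Calico uses a host-wide lock, a mutex merely stands in here, and the assignment order across goroutines is deliberately nondeterministic.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex // stands in for the host-wide IPAM lock
		next = 2        // ordinals 0-1 assumed taken in 192.168.107.192/26
		wg   sync.WaitGroup
	)
	claim := func(pod string) {
		defer wg.Done()
		mu.Lock() // "Acquired host-wide IPAM lock."
		ord := next
		next++
		mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Printf("%s -> 192.168.107.%d/26\n", pod, 192+ord)
	}
	// The four sandboxes in this excerpt ended up with .194 through .197;
	// which pod gets which ordinal depends on scheduling, but no address
	// is ever handed out twice.
	for _, p := range []string{"coredns-n4vhw", "kube-controllers-ks4c2", "goldmane-8dh9r", "apiserver-qtqrp"} {
		wg.Add(1)
		go claim(p)
	}
	wg.Wait()
}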
Sep 13 00:13:41.584011 containerd[1720]: 2025-09-13 00:13:41.541 [INFO][5196] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.197/26] IPv6=[] ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.544 [INFO][5183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec7c8d9-92bc-4074-ba92-6e1593b9ed77", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"calico-apiserver-5b854d5954-qtqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7302a5352fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.544 [INFO][5183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.197/32] ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.544 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7302a5352fd ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.555 [INFO][5183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.561 
[INFO][5183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec7c8d9-92bc-4074-ba92-6e1593b9ed77", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75", Pod:"calico-apiserver-5b854d5954-qtqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7302a5352fd", MAC:"86:cd:47:67:4c:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:41.586490 containerd[1720]: 2025-09-13 00:13:41.580 [INFO][5183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-qtqrp" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:41.638536 containerd[1720]: time="2025-09-13T00:13:41.638363268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:41.639112 containerd[1720]: time="2025-09-13T00:13:41.639033379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:41.639112 containerd[1720]: time="2025-09-13T00:13:41.639061679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:41.640491 containerd[1720]: time="2025-09-13T00:13:41.639668389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:41.678604 systemd-networkd[1578]: calice35e9be906: Gained IPv6LL Sep 13 00:13:41.691812 systemd[1]: Started cri-containerd-624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75.scope - libcontainer container 624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75. 
Sep 13 00:13:41.812496 containerd[1720]: time="2025-09-13T00:13:41.812314849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-qtqrp,Uid:cec7c8d9-92bc-4074-ba92-6e1593b9ed77,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\"" Sep 13 00:13:41.870667 systemd-networkd[1578]: cali3b4196a06d8: Gained IPv6LL Sep 13 00:13:41.900205 containerd[1720]: time="2025-09-13T00:13:41.900143504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:41.902872 containerd[1720]: time="2025-09-13T00:13:41.902633645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:13:41.911809 containerd[1720]: time="2025-09-13T00:13:41.911725796Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:41.916525 containerd[1720]: time="2025-09-13T00:13:41.916424274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:41.917503 containerd[1720]: time="2025-09-13T00:13:41.917253487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.132914993s" Sep 13 00:13:41.917503 containerd[1720]: time="2025-09-13T00:13:41.917298688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:13:41.920229 containerd[1720]: time="2025-09-13T00:13:41.919703228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:13:41.920578 containerd[1720]: time="2025-09-13T00:13:41.920543342Z" level=info msg="CreateContainer within sandbox \"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:13:41.956316 containerd[1720]: time="2025-09-13T00:13:41.956266734Z" level=info msg="CreateContainer within sandbox \"be56847f512cf6c6d763ae0803cb0539a2471673bfe4bbe8856542a8ada09d35\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"be6348c042b69fb3b3a6cd2871287d75885e99382629ac42b526d5f33a346b2e\"" Sep 13 00:13:41.957240 containerd[1720]: time="2025-09-13T00:13:41.957201949Z" level=info msg="StartContainer for \"be6348c042b69fb3b3a6cd2871287d75885e99382629ac42b526d5f33a346b2e\"" Sep 13 00:13:41.986675 systemd[1]: Started cri-containerd-be6348c042b69fb3b3a6cd2871287d75885e99382629ac42b526d5f33a346b2e.scope - libcontainer container be6348c042b69fb3b3a6cd2871287d75885e99382629ac42b526d5f33a346b2e. 
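[Annotation] The whisker-backend pull record above gives everything needed for a quick throughput estimate: 33,085,375 bytes in 3.132914993 s, i.e. roughly 10.6 MB/s from ghcr.io:

package main

import "fmt"

func main() {
	const pulledBytes = 33085375  // size reported for whisker-backend:v3.30.3
	const seconds = 3.132914993   // pull duration from the same record
	fmt.Printf("%.1f MB/s\n", pulledBytes/seconds/1e6) // ≈ 10.6 MB/s
}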
Sep 13 00:13:42.033179 containerd[1720]: time="2025-09-13T00:13:42.033113206Z" level=info msg="StartContainer for \"be6348c042b69fb3b3a6cd2871287d75885e99382629ac42b526d5f33a346b2e\" returns successfully" Sep 13 00:13:42.193824 containerd[1720]: time="2025-09-13T00:13:42.193685866Z" level=info msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" Sep 13 00:13:42.194267 containerd[1720]: time="2025-09-13T00:13:42.193685766Z" level=info msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.265 [INFO][5330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.267 [INFO][5330] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" iface="eth0" netns="/var/run/netns/cni-d6414c85-95c2-9d39-9d6b-30ac2444da05" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.267 [INFO][5330] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" iface="eth0" netns="/var/run/netns/cni-d6414c85-95c2-9d39-9d6b-30ac2444da05" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.267 [INFO][5330] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" iface="eth0" netns="/var/run/netns/cni-d6414c85-95c2-9d39-9d6b-30ac2444da05" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.267 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.267 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.300 [INFO][5343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.300 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.300 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.308 [WARNING][5343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.308 [INFO][5343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.309 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:42.314434 containerd[1720]: 2025-09-13 00:13:42.312 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:42.315858 containerd[1720]: time="2025-09-13T00:13:42.314649170Z" level=info msg="TearDown network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" successfully" Sep 13 00:13:42.315858 containerd[1720]: time="2025-09-13T00:13:42.314681670Z" level=info msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" returns successfully" Sep 13 00:13:42.316629 containerd[1720]: time="2025-09-13T00:13:42.316590602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9bxj,Uid:f346ff52-85ec-4854-bcc7-81887dda8d38,Namespace:calico-system,Attempt:1,}" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.277 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.278 [INFO][5331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" iface="eth0" netns="/var/run/netns/cni-727b186f-74e6-1bc5-32fe-36b6ceee67b6" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.278 [INFO][5331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" iface="eth0" netns="/var/run/netns/cni-727b186f-74e6-1bc5-32fe-36b6ceee67b6" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.279 [INFO][5331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" iface="eth0" netns="/var/run/netns/cni-727b186f-74e6-1bc5-32fe-36b6ceee67b6" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.279 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.279 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.316 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.316 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.316 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.323 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.323 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.324 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:42.327655 containerd[1720]: 2025-09-13 00:13:42.326 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:42.328208 containerd[1720]: time="2025-09-13T00:13:42.327773187Z" level=info msg="TearDown network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" successfully" Sep 13 00:13:42.328208 containerd[1720]: time="2025-09-13T00:13:42.327800688Z" level=info msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" returns successfully" Sep 13 00:13:42.328918 containerd[1720]: time="2025-09-13T00:13:42.328886406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pn2sm,Uid:a930d21c-bb14-4796-bdaa-457b525327fa,Namespace:kube-system,Attempt:1,}" Sep 13 00:13:42.337035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225385415.mount: Deactivated successfully. Sep 13 00:13:42.337171 systemd[1]: run-netns-cni\x2d727b186f\x2d74e6\x2d1bc5\x2d32fe\x2d36b6ceee67b6.mount: Deactivated successfully. Sep 13 00:13:42.337233 systemd[1]: run-netns-cni\x2dd6414c85\x2d95c2\x2d9d39\x2d9d6b\x2d30ac2444da05.mount: Deactivated successfully. 
Sep 13 00:13:42.599242 systemd-networkd[1578]: cali0f88612bd8a: Link UP Sep 13 00:13:42.600983 systemd-networkd[1578]: cali0f88612bd8a: Gained carrier Sep 13 00:13:42.624780 kubelet[3252]: I0913 00:13:42.624511 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-659d785666-5b4fx" podStartSLOduration=2.060067658 podStartE2EDuration="6.624490002s" podCreationTimestamp="2025-09-13 00:13:36 +0000 UTC" firstStartedPulling="2025-09-13 00:13:37.354054464 +0000 UTC m=+42.256085882" lastFinishedPulling="2025-09-13 00:13:41.918476808 +0000 UTC m=+46.820508226" observedRunningTime="2025-09-13 00:13:42.51265725 +0000 UTC m=+47.414688668" watchObservedRunningTime="2025-09-13 00:13:42.624490002 +0000 UTC m=+47.526521520" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.429 [INFO][5357] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.444 [INFO][5357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0 csi-node-driver- calico-system f346ff52-85ec-4854-bcc7-81887dda8d38 956 0 2025-09-13 00:13:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 csi-node-driver-l9bxj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f88612bd8a [] [] }} ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.444 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.531 [INFO][5381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" HandleID="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.531 [INFO][5381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" HandleID="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003017a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"csi-node-driver-l9bxj", "timestamp":"2025-09-13 00:13:42.531611964 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:42.627781 
containerd[1720]: 2025-09-13 00:13:42.531 [INFO][5381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.531 [INFO][5381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.531 [INFO][5381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.545 [INFO][5381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.559 [INFO][5381] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.566 [INFO][5381] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.568 [INFO][5381] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.571 [INFO][5381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.571 [INFO][5381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.573 [INFO][5381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436 Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.580 [INFO][5381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.198/26] block=192.168.107.192/26 handle="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.198/26] handle="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
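The ADD sequence above shows the allocation order: confirm this node's affinity for block 192.168.107.192/26, load the block, then claim the next free address in it (192.168.107.198 here). A rough stdlib-only sketch of "next free address in an affine block" follows, assuming simple first-fit for illustration — Calico's real allocator also tracks handles, attributes, and reservations:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not already allocated.
// First-fit is an assumption for illustration, not Calico's exact policy.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	// The node-affine block from the log.
	block := netip.MustParsePrefix("192.168.107.192/26")
	allocated := map[netip.Addr]bool{}
	// Assume .192-.197 are in use by earlier endpoints on this node.
	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.107.198")) < 0; a = a.Next() {
		allocated[a] = true
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println("claimed", a) // 192.168.107.198, matching csi-node-driver-l9bxj
	}
}
```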
Sep 13 00:13:42.627781 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.198/26] IPv6=[] ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" HandleID="k8s-pod-network.15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.596 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f346ff52-85ec-4854-bcc7-81887dda8d38", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"csi-node-driver-l9bxj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f88612bd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.596 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.198/32] ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.596 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f88612bd8a ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.600 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.601 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f346ff52-85ec-4854-bcc7-81887dda8d38", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436", Pod:"csi-node-driver-l9bxj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f88612bd8a", MAC:"92:a5:8f:77:81:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:42.628747 containerd[1720]: 2025-09-13 00:13:42.621 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436" Namespace="calico-system" Pod="csi-node-driver-l9bxj" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:42.661457 containerd[1720]: time="2025-09-13T00:13:42.660203794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:42.661457 containerd[1720]: time="2025-09-13T00:13:42.660284095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:42.661457 containerd[1720]: time="2025-09-13T00:13:42.660307495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:42.661457 containerd[1720]: time="2025-09-13T00:13:42.660424197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:42.700661 systemd[1]: Started cri-containerd-15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436.scope - libcontainer container 15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436. 
Sep 13 00:13:42.746176 containerd[1720]: time="2025-09-13T00:13:42.746035615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l9bxj,Uid:f346ff52-85ec-4854-bcc7-81887dda8d38,Namespace:calico-system,Attempt:1,} returns sandbox id \"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436\"" Sep 13 00:13:42.753933 systemd-networkd[1578]: cali73bd08ecf07: Link UP Sep 13 00:13:42.754878 systemd-networkd[1578]: cali73bd08ecf07: Gained carrier Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.444 [INFO][5368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.465 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0 coredns-7c65d6cfc9- kube-system a930d21c-bb14-4796-bdaa-457b525327fa 957 0 2025-09-13 00:13:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 coredns-7c65d6cfc9-pn2sm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73bd08ecf07 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.465 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.556 [INFO][5388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" HandleID="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.556 [INFO][5388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" HandleID="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-78cb87e672", "pod":"coredns-7c65d6cfc9-pn2sm", "timestamp":"2025-09-13 00:13:42.556264472 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.556 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.592 [INFO][5388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.647 [INFO][5388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.662 [INFO][5388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.686 [INFO][5388] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.692 [INFO][5388] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.707 [INFO][5388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.707 [INFO][5388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.709 [INFO][5388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797 Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.718 [INFO][5388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.745 [INFO][5388] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.199/26] block=192.168.107.192/26 handle="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.746 [INFO][5388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.199/26] handle="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.746 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
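The interleaving above also shows what the "host-wide IPAM lock" lines buy: request [5388] (coredns) asked to allocate at 00:13:42.556 but only acquired the lock at 00:13:42.592, the instant [5381] (csi-node-driver) released it, so concurrent CNI ADDs on one node claim addresses strictly one at a time and cannot double-assign from the shared /26. A sketch of that serialization under the same toy-allocator assumptions as the earlier example:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 198 // first free host octet in 192.168.107.192/26, as in the log
		wg   sync.WaitGroup
	)
	claim := func(pod string) {
		defer wg.Done()
		mu.Lock() // "About to acquire host-wide IPAM lock." -> "Acquired ..."
		ip := fmt.Sprintf("192.168.107.%d/26", next)
		next++
		mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Println(pod, "claimed", ip)
	}
	wg.Add(2)
	// Which pod gets .198 depends on goroutine scheduling; in the log the
	// csi-node-driver ADD happened to win the lock first.
	go claim("csi-node-driver-l9bxj")
	go claim("coredns-7c65d6cfc9-pn2sm")
	wg.Wait()
}
```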
Sep 13 00:13:42.776914 containerd[1720]: 2025-09-13 00:13:42.746 [INFO][5388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.199/26] IPv6=[] ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" HandleID="k8s-pod-network.6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.750 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a930d21c-bb14-4796-bdaa-457b525327fa", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"coredns-7c65d6cfc9-pn2sm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73bd08ecf07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.750 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.199/32] ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.750 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73bd08ecf07 ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.754 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.755 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a930d21c-bb14-4796-bdaa-457b525327fa", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797", Pod:"coredns-7c65d6cfc9-pn2sm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73bd08ecf07", MAC:"f2:39:7e:bb:28:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:42.777556 containerd[1720]: 2025-09-13 00:13:42.774 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pn2sm" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:42.802544 containerd[1720]: time="2025-09-13T00:13:42.801703837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:42.802544 containerd[1720]: time="2025-09-13T00:13:42.801777939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:42.802544 containerd[1720]: time="2025-09-13T00:13:42.801794239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:42.802544 containerd[1720]: time="2025-09-13T00:13:42.801884440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:42.826655 systemd[1]: Started cri-containerd-6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797.scope - libcontainer container 6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797. Sep 13 00:13:42.883727 containerd[1720]: time="2025-09-13T00:13:42.883499592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pn2sm,Uid:a930d21c-bb14-4796-bdaa-457b525327fa,Namespace:kube-system,Attempt:1,} returns sandbox id \"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797\"" Sep 13 00:13:42.893391 containerd[1720]: time="2025-09-13T00:13:42.893350155Z" level=info msg="CreateContainer within sandbox \"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:13:42.933314 containerd[1720]: time="2025-09-13T00:13:42.933262616Z" level=info msg="CreateContainer within sandbox \"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33c2eb66c7eb4766b49ea2a9afec295a05898d372dad792c84764e3a6d9dbe75\"" Sep 13 00:13:42.936369 containerd[1720]: time="2025-09-13T00:13:42.934186132Z" level=info msg="StartContainer for \"33c2eb66c7eb4766b49ea2a9afec295a05898d372dad792c84764e3a6d9dbe75\"" Sep 13 00:13:42.964675 systemd[1]: Started cri-containerd-33c2eb66c7eb4766b49ea2a9afec295a05898d372dad792c84764e3a6d9dbe75.scope - libcontainer container 33c2eb66c7eb4766b49ea2a9afec295a05898d372dad792c84764e3a6d9dbe75. Sep 13 00:13:43.008528 containerd[1720]: time="2025-09-13T00:13:43.008301559Z" level=info msg="StartContainer for \"33c2eb66c7eb4766b49ea2a9afec295a05898d372dad792c84764e3a6d9dbe75\" returns successfully" Sep 13 00:13:43.022942 systemd-networkd[1578]: cali7302a5352fd: Gained IPv6LL Sep 13 00:13:43.554427 kubelet[3252]: I0913 00:13:43.553015 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pn2sm" podStartSLOduration=43.552993482 podStartE2EDuration="43.552993482s" podCreationTimestamp="2025-09-13 00:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:13:43.517186889 +0000 UTC m=+48.419218307" watchObservedRunningTime="2025-09-13 00:13:43.552993482 +0000 UTC m=+48.455024900" Sep 13 00:13:43.662641 systemd-networkd[1578]: cali0f88612bd8a: Gained IPv6LL Sep 13 00:13:44.194060 containerd[1720]: time="2025-09-13T00:13:44.194011299Z" level=info msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.298 [INFO][5576] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.299 [INFO][5576] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" iface="eth0" netns="/var/run/netns/cni-c9e88f63-ed17-25d7-ec1a-fe8c14fff0f4" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.305 [INFO][5576] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" iface="eth0" netns="/var/run/netns/cni-c9e88f63-ed17-25d7-ec1a-fe8c14fff0f4" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.308 [INFO][5576] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" iface="eth0" netns="/var/run/netns/cni-c9e88f63-ed17-25d7-ec1a-fe8c14fff0f4" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.308 [INFO][5576] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.308 [INFO][5576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.382 [INFO][5583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.386 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.386 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.398 [WARNING][5583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.398 [INFO][5583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.410 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:44.418882 containerd[1720]: 2025-09-13 00:13:44.415 [INFO][5576] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:44.421815 containerd[1720]: time="2025-09-13T00:13:44.421665170Z" level=info msg="TearDown network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" successfully" Sep 13 00:13:44.421815 containerd[1720]: time="2025-09-13T00:13:44.421704871Z" level=info msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" returns successfully" Sep 13 00:13:44.428191 containerd[1720]: time="2025-09-13T00:13:44.428150378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-ts7vf,Uid:c5cc334d-ce94-427b-92b7-c5a6c28738bf,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:13:44.429108 systemd[1]: run-netns-cni\x2dc9e88f63\x2ded17\x2d25d7\x2dec1a\x2dfe8c14fff0f4.mount: Deactivated successfully. 
Sep 13 00:13:44.732240 systemd-networkd[1578]: cali339333db51a: Link UP Sep 13 00:13:44.732572 systemd-networkd[1578]: cali339333db51a: Gained carrier Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.550 [INFO][5591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.567 [INFO][5591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0 calico-apiserver-fc4d7f994- calico-apiserver c5cc334d-ce94-427b-92b7-c5a6c28738bf 990 0 2025-09-13 00:13:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc4d7f994 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 calico-apiserver-fc4d7f994-ts7vf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali339333db51a [] [] }} ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.567 [INFO][5591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.638 [INFO][5613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" HandleID="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.640 [INFO][5613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" HandleID="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b60b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-78cb87e672", "pod":"calico-apiserver-fc4d7f994-ts7vf", "timestamp":"2025-09-13 00:13:44.638006254 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.640 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.640 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.640 [INFO][5613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.653 [INFO][5613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.661 [INFO][5613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.667 [INFO][5613] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.672 [INFO][5613] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.676 [INFO][5613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.677 [INFO][5613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.679 [INFO][5613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332 Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.688 [INFO][5613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.710 [INFO][5613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.200/26] block=192.168.107.192/26 handle="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.710 [INFO][5613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.200/26] handle="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.710 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:13:44.771802 containerd[1720]: 2025-09-13 00:13:44.710 [INFO][5613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.200/26] IPv6=[] ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" HandleID="k8s-pod-network.36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.715 [INFO][5591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5cc334d-ce94-427b-92b7-c5a6c28738bf", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"calico-apiserver-fc4d7f994-ts7vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali339333db51a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.716 [INFO][5591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.200/32] ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.716 [INFO][5591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali339333db51a ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.734 [INFO][5591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.743 [INFO][5591] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5cc334d-ce94-427b-92b7-c5a6c28738bf", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332", Pod:"calico-apiserver-fc4d7f994-ts7vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali339333db51a", MAC:"f6:51:ff:d6:d7:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:44.773144 containerd[1720]: 2025-09-13 00:13:44.766 [INFO][5591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-ts7vf" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:44.814685 systemd-networkd[1578]: cali73bd08ecf07: Gained IPv6LL Sep 13 00:13:44.831383 containerd[1720]: time="2025-09-13T00:13:44.830855148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:44.831383 containerd[1720]: time="2025-09-13T00:13:44.831043851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:44.831383 containerd[1720]: time="2025-09-13T00:13:44.831067552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:44.831921 containerd[1720]: time="2025-09-13T00:13:44.831658761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:44.844822 kubelet[3252]: I0913 00:13:44.844740 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:13:44.880685 systemd[1]: Started cri-containerd-36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332.scope - libcontainer container 36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332. Sep 13 00:13:45.059804 containerd[1720]: time="2025-09-13T00:13:45.059763140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-ts7vf,Uid:c5cc334d-ce94-427b-92b7-c5a6c28738bf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332\"" Sep 13 00:13:45.197496 containerd[1720]: time="2025-09-13T00:13:45.195560489Z" level=info msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.324 [INFO][5686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.325 [INFO][5686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" iface="eth0" netns="/var/run/netns/cni-29958fbe-5ecb-41c0-6e07-0ad7210f2df5" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.325 [INFO][5686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" iface="eth0" netns="/var/run/netns/cni-29958fbe-5ecb-41c0-6e07-0ad7210f2df5" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.325 [INFO][5686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" iface="eth0" netns="/var/run/netns/cni-29958fbe-5ecb-41c0-6e07-0ad7210f2df5" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.325 [INFO][5686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.326 [INFO][5686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.377 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.378 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.378 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.386 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.386 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.388 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:45.392515 containerd[1720]: 2025-09-13 00:13:45.389 [INFO][5686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:45.392515 containerd[1720]: time="2025-09-13T00:13:45.392275147Z" level=info msg="TearDown network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" successfully" Sep 13 00:13:45.392515 containerd[1720]: time="2025-09-13T00:13:45.392309348Z" level=info msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" returns successfully" Sep 13 00:13:45.394844 containerd[1720]: time="2025-09-13T00:13:45.393861074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-dnf7s,Uid:5dc69410-9fe3-44c7-9677-e9d7c8975f57,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:13:45.425914 systemd[1]: run-netns-cni\x2d29958fbe\x2d5ecb\x2d41c0\x2d6e07\x2d0ad7210f2df5.mount: Deactivated successfully. 
Sep 13 00:13:45.602086 systemd-networkd[1578]: calib1f1c20c6aa: Link UP Sep 13 00:13:45.605611 systemd-networkd[1578]: calib1f1c20c6aa: Gained carrier Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.479 [INFO][5718] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.492 [INFO][5718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0 calico-apiserver-5b854d5954- calico-apiserver 5dc69410-9fe3-44c7-9677-e9d7c8975f57 1004 0 2025-09-13 00:13:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b854d5954 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 calico-apiserver-5b854d5954-dnf7s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib1f1c20c6aa [] [] }} ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.492 [INFO][5718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.531 [INFO][5731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.531 [INFO][5731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-78cb87e672", "pod":"calico-apiserver-5b854d5954-dnf7s", "timestamp":"2025-09-13 00:13:45.531702857 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.531 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.532 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.532 [INFO][5731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.541 [INFO][5731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.548 [INFO][5731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.553 [INFO][5731] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.556 [INFO][5731] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.559 [INFO][5731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.559 [INFO][5731] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.561 [INFO][5731] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1 Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.570 [INFO][5731] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.587 [INFO][5731] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.201/26] block=192.168.107.192/26 handle="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.587 [INFO][5731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.201/26] handle="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.587 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:13:45.631700 containerd[1720]: 2025-09-13 00:13:45.587 [INFO][5731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.201/26] IPv6=[] ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.589 [INFO][5718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dc69410-9fe3-44c7-9677-e9d7c8975f57", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"calico-apiserver-5b854d5954-dnf7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1f1c20c6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.590 [INFO][5718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.201/32] ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.590 [INFO][5718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1f1c20c6aa ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.605 [INFO][5718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.605 
[INFO][5718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dc69410-9fe3-44c7-9677-e9d7c8975f57", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1", Pod:"calico-apiserver-5b854d5954-dnf7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1f1c20c6aa", MAC:"12:9d:1b:8b:58:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:45.633751 containerd[1720]: 2025-09-13 00:13:45.626 [INFO][5718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Namespace="calico-apiserver" Pod="calico-apiserver-5b854d5954-dnf7s" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:45.684672 containerd[1720]: time="2025-09-13T00:13:45.683672674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:13:45.684817 containerd[1720]: time="2025-09-13T00:13:45.683738875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:13:45.684817 containerd[1720]: time="2025-09-13T00:13:45.684612090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:45.688610 containerd[1720]: time="2025-09-13T00:13:45.685529305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:13:45.737682 systemd[1]: Started cri-containerd-d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1.scope - libcontainer container d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1. 
Sep 13 00:13:45.921965 containerd[1720]: time="2025-09-13T00:13:45.921914420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b854d5954-dnf7s,Uid:5dc69410-9fe3-44c7-9677-e9d7c8975f57,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\"" Sep 13 00:13:46.071237 containerd[1720]: time="2025-09-13T00:13:46.071182712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:46.074362 containerd[1720]: time="2025-09-13T00:13:46.074119861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:13:46.083496 containerd[1720]: time="2025-09-13T00:13:46.081139078Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:46.099359 containerd[1720]: time="2025-09-13T00:13:46.099274381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:46.102319 containerd[1720]: time="2025-09-13T00:13:46.100688904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.180947476s" Sep 13 00:13:46.102319 containerd[1720]: time="2025-09-13T00:13:46.100735805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:13:46.103064 containerd[1720]: time="2025-09-13T00:13:46.103014943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:13:46.115111 containerd[1720]: time="2025-09-13T00:13:46.115053544Z" level=info msg="CreateContainer within sandbox \"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:13:46.168544 containerd[1720]: time="2025-09-13T00:13:46.168492936Z" level=info msg="CreateContainer within sandbox \"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6\"" Sep 13 00:13:46.169355 containerd[1720]: time="2025-09-13T00:13:46.169317150Z" level=info msg="StartContainer for \"0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6\"" Sep 13 00:13:46.204185 systemd[1]: Started cri-containerd-0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6.scope - libcontainer container 0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6. 
Sep 13 00:13:46.246578 kernel: bpftool[5849]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:13:46.275720 containerd[1720]: time="2025-09-13T00:13:46.275679125Z" level=info msg="StartContainer for \"0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6\" returns successfully" Sep 13 00:13:46.549018 kubelet[3252]: I0913 00:13:46.548928 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c967b48b-ks4c2" podStartSLOduration=25.365507042 podStartE2EDuration="31.548904386s" podCreationTimestamp="2025-09-13 00:13:15 +0000 UTC" firstStartedPulling="2025-09-13 00:13:39.919434596 +0000 UTC m=+44.821466114" lastFinishedPulling="2025-09-13 00:13:46.10283204 +0000 UTC m=+51.004863458" observedRunningTime="2025-09-13 00:13:46.548164474 +0000 UTC m=+51.450195992" watchObservedRunningTime="2025-09-13 00:13:46.548904386 +0000 UTC m=+51.450935904" Sep 13 00:13:46.670612 systemd-networkd[1578]: calib1f1c20c6aa: Gained IPv6LL Sep 13 00:13:46.734654 systemd-networkd[1578]: cali339333db51a: Gained IPv6LL Sep 13 00:13:46.755821 systemd-networkd[1578]: vxlan.calico: Link UP Sep 13 00:13:46.755837 systemd-networkd[1578]: vxlan.calico: Gained carrier Sep 13 00:13:48.399353 systemd-networkd[1578]: vxlan.calico: Gained IPv6LL Sep 13 00:13:48.703492 systemd[1]: run-containerd-runc-k8s.io-ae50d1e5cf490afc35e70118264072c3184d79de8f6f1a89f3d7d16f6605d8af-runc.7XyvTk.mount: Deactivated successfully. Sep 13 00:13:48.948663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468016745.mount: Deactivated successfully. Sep 13 00:13:49.679727 containerd[1720]: time="2025-09-13T00:13:49.679670848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:49.682324 containerd[1720]: time="2025-09-13T00:13:49.682151789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:13:49.685481 containerd[1720]: time="2025-09-13T00:13:49.685179740Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:49.689339 containerd[1720]: time="2025-09-13T00:13:49.689294708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:49.690667 containerd[1720]: time="2025-09-13T00:13:49.690257225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.587202581s" Sep 13 00:13:49.690667 containerd[1720]: time="2025-09-13T00:13:49.690300325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:13:49.692351 containerd[1720]: time="2025-09-13T00:13:49.692315859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:13:49.694308 containerd[1720]: time="2025-09-13T00:13:49.694269792Z" level=info msg="CreateContainer within 
sandbox \"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:13:49.746031 containerd[1720]: time="2025-09-13T00:13:49.745985955Z" level=info msg="CreateContainer within sandbox \"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4\"" Sep 13 00:13:49.747911 containerd[1720]: time="2025-09-13T00:13:49.746566465Z" level=info msg="StartContainer for \"1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4\"" Sep 13 00:13:49.783637 systemd[1]: Started cri-containerd-1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4.scope - libcontainer container 1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4. Sep 13 00:13:49.833581 containerd[1720]: time="2025-09-13T00:13:49.833534416Z" level=info msg="StartContainer for \"1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4\" returns successfully" Sep 13 00:13:52.611728 systemd[1]: run-containerd-runc-k8s.io-1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4-runc.WqjjMr.mount: Deactivated successfully. Sep 13 00:13:55.229756 containerd[1720]: time="2025-09-13T00:13:55.229697726Z" level=info msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.311 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f346ff52-85ec-4854-bcc7-81887dda8d38", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436", Pod:"csi-node-driver-l9bxj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f88612bd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.311 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.311 
[INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" iface="eth0" netns="" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.311 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.311 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.358 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.358 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.359 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.366 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.366 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.368 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:55.372567 containerd[1720]: 2025-09-13 00:13:55.370 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.373846 containerd[1720]: time="2025-09-13T00:13:55.373279927Z" level=info msg="TearDown network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" successfully" Sep 13 00:13:55.373846 containerd[1720]: time="2025-09-13T00:13:55.373325027Z" level=info msg="StopPodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" returns successfully" Sep 13 00:13:55.374116 containerd[1720]: time="2025-09-13T00:13:55.374087340Z" level=info msg="RemovePodSandbox for \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" Sep 13 00:13:55.374204 containerd[1720]: time="2025-09-13T00:13:55.374127741Z" level=info msg="Forcibly stopping sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\"" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.420 [WARNING][6145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f346ff52-85ec-4854-bcc7-81887dda8d38", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436", Pod:"csi-node-driver-l9bxj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f88612bd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.421 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.421 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" iface="eth0" netns="" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.421 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.421 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.452 [INFO][6152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.452 [INFO][6152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.452 [INFO][6152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.459 [WARNING][6152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.459 [INFO][6152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" HandleID="k8s-pod-network.dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Workload="ci--4081.3.5--n--78cb87e672-k8s-csi--node--driver--l9bxj-eth0" Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.461 [INFO][6152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:55.464823 containerd[1720]: 2025-09-13 00:13:55.463 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b" Sep 13 00:13:55.467519 containerd[1720]: time="2025-09-13T00:13:55.464891058Z" level=info msg="TearDown network for sandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" successfully" Sep 13 00:13:56.183650 containerd[1720]: time="2025-09-13T00:13:56.183591373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:56.183917 containerd[1720]: time="2025-09-13T00:13:56.183688275Z" level=info msg="RemovePodSandbox \"dd2d56b7ef02626c568bab9cad9207867c045409e84e6743ce8547fe939c665b\" returns successfully" Sep 13 00:13:56.184340 containerd[1720]: time="2025-09-13T00:13:56.184310685Z" level=info msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" Sep 13 00:13:56.241499 containerd[1720]: time="2025-09-13T00:13:56.241241137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:56.247231 containerd[1720]: time="2025-09-13T00:13:56.247149236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:13:56.251699 containerd[1720]: time="2025-09-13T00:13:56.251516609Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:56.265952 containerd[1720]: time="2025-09-13T00:13:56.265616245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:56.270588 containerd[1720]: time="2025-09-13T00:13:56.270432325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.578079066s" Sep 13 00:13:56.270960 containerd[1720]: time="2025-09-13T00:13:56.270934733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:13:56.275501 containerd[1720]: time="2025-09-13T00:13:56.274514693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:13:56.276707 containerd[1720]: time="2025-09-13T00:13:56.276676929Z" level=info msg="CreateContainer within sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:13:56.338953 containerd[1720]: time="2025-09-13T00:13:56.338906870Z" level=info msg="CreateContainer within sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\"" Sep 13 00:13:56.341293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3171004492.mount: Deactivated successfully. Sep 13 00:13:56.344953 containerd[1720]: time="2025-09-13T00:13:56.344454563Z" level=info msg="StartContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\"" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.250 [WARNING][6169] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.250 [INFO][6169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.251 [INFO][6169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" iface="eth0" netns="" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.251 [INFO][6169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.251 [INFO][6169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.315 [INFO][6177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.317 [INFO][6177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.317 [INFO][6177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.334 [WARNING][6177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.334 [INFO][6177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.340 [INFO][6177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:56.353880 containerd[1720]: 2025-09-13 00:13:56.346 [INFO][6169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.355325 containerd[1720]: time="2025-09-13T00:13:56.353894120Z" level=info msg="TearDown network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" successfully" Sep 13 00:13:56.355325 containerd[1720]: time="2025-09-13T00:13:56.353924821Z" level=info msg="StopPodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" returns successfully" Sep 13 00:13:56.355325 containerd[1720]: time="2025-09-13T00:13:56.354389329Z" level=info msg="RemovePodSandbox for \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" Sep 13 00:13:56.355325 containerd[1720]: time="2025-09-13T00:13:56.354421329Z" level=info msg="Forcibly stopping sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\"" Sep 13 00:13:56.410700 systemd[1]: Started cri-containerd-59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665.scope - libcontainer container 59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665. Sep 13 00:13:56.495406 containerd[1720]: time="2025-09-13T00:13:56.494997779Z" level=info msg="StartContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" returns successfully" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.472 [WARNING][6203] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.473 [INFO][6203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.473 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" iface="eth0" netns="" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.473 [INFO][6203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.473 [INFO][6203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.530 [INFO][6228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.530 [INFO][6228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.530 [INFO][6228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.539 [WARNING][6228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.539 [INFO][6228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" HandleID="k8s-pod-network.22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Workload="ci--4081.3.5--n--78cb87e672-k8s-whisker--7d4cc886d7--krggt-eth0" Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.541 [INFO][6228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:56.545705 containerd[1720]: 2025-09-13 00:13:56.542 [INFO][6203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f" Sep 13 00:13:56.546288 containerd[1720]: time="2025-09-13T00:13:56.545751028Z" level=info msg="TearDown network for sandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" successfully" Sep 13 00:13:56.557791 containerd[1720]: time="2025-09-13T00:13:56.557734528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:13:56.558309 containerd[1720]: time="2025-09-13T00:13:56.557824130Z" level=info msg="RemovePodSandbox \"22ef26423a901e1509560216519b7a96896bef761571374e9fd3ce19d2e0bb0f\" returns successfully" Sep 13 00:13:56.559752 containerd[1720]: time="2025-09-13T00:13:56.559679861Z" level=info msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" Sep 13 00:13:56.609784 kubelet[3252]: I0913 00:13:56.609699 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-8dh9r" podStartSLOduration=32.947807936 podStartE2EDuration="41.609674096s" podCreationTimestamp="2025-09-13 00:13:15 +0000 UTC" firstStartedPulling="2025-09-13 00:13:41.029690486 +0000 UTC m=+45.931721904" lastFinishedPulling="2025-09-13 00:13:49.691556546 +0000 UTC m=+54.593588064" observedRunningTime="2025-09-13 00:13:50.57659172 +0000 UTC m=+55.478623138" watchObservedRunningTime="2025-09-13 00:13:56.609674096 +0000 UTC m=+61.511705514" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.697 [WARNING][6245] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a930d21c-bb14-4796-bdaa-457b525327fa", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797", Pod:"coredns-7c65d6cfc9-pn2sm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73bd08ecf07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.698 [INFO][6245] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.698 [INFO][6245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" iface="eth0" netns="" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.699 [INFO][6245] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.699 [INFO][6245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.739 [INFO][6259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.739 [INFO][6259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.739 [INFO][6259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.751 [WARNING][6259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.751 [INFO][6259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.755 [INFO][6259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:56.760770 containerd[1720]: 2025-09-13 00:13:56.758 [INFO][6245] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.760770 containerd[1720]: time="2025-09-13T00:13:56.759939209Z" level=info msg="TearDown network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" successfully" Sep 13 00:13:56.760770 containerd[1720]: time="2025-09-13T00:13:56.759970609Z" level=info msg="StopPodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" returns successfully" Sep 13 00:13:56.764236 containerd[1720]: time="2025-09-13T00:13:56.763581269Z" level=info msg="RemovePodSandbox for \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" Sep 13 00:13:56.764236 containerd[1720]: time="2025-09-13T00:13:56.763623370Z" level=info msg="Forcibly stopping sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\"" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.821 [WARNING][6274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a930d21c-bb14-4796-bdaa-457b525327fa", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"6bd233a5382ae8eb7650c3645730b5983a4b481e3801ea9ca05c673cfa94a797", Pod:"coredns-7c65d6cfc9-pn2sm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73bd08ecf07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.822 [INFO][6274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.822 [INFO][6274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" iface="eth0" netns="" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.822 [INFO][6274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.822 [INFO][6274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.858 [INFO][6281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.859 [INFO][6281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.859 [INFO][6281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.885 [WARNING][6281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.885 [INFO][6281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" HandleID="k8s-pod-network.2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--pn2sm-eth0" Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.890 [INFO][6281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:56.893579 containerd[1720]: 2025-09-13 00:13:56.891 [INFO][6274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f" Sep 13 00:13:56.895372 containerd[1720]: time="2025-09-13T00:13:56.893624143Z" level=info msg="TearDown network for sandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" successfully" Sep 13 00:13:56.914243 containerd[1720]: time="2025-09-13T00:13:56.914186087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:56.914605 containerd[1720]: time="2025-09-13T00:13:56.914310589Z" level=info msg="RemovePodSandbox \"2dbf5d726a01d64009ee71c7c30f13cd03444dd21816a849d31b655835d1779f\" returns successfully" Sep 13 00:13:56.914914 containerd[1720]: time="2025-09-13T00:13:56.914860498Z" level=info msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:56.983 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dc69410-9fe3-44c7-9677-e9d7c8975f57", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1", Pod:"calico-apiserver-5b854d5954-dnf7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1f1c20c6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:56.983 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:56.983 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" iface="eth0" netns="" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:56.983 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:56.983 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.016 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.017 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.017 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.046 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.046 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.050 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.055362 containerd[1720]: 2025-09-13 00:13:57.053 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.056100 containerd[1720]: time="2025-09-13T00:13:57.055432949Z" level=info msg="TearDown network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" successfully" Sep 13 00:13:57.056100 containerd[1720]: time="2025-09-13T00:13:57.055492650Z" level=info msg="StopPodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" returns successfully" Sep 13 00:13:57.056100 containerd[1720]: time="2025-09-13T00:13:57.056040359Z" level=info msg="RemovePodSandbox for \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" Sep 13 00:13:57.056100 containerd[1720]: time="2025-09-13T00:13:57.056078359Z" level=info msg="Forcibly stopping sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\"" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.129 [WARNING][6316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"5dc69410-9fe3-44c7-9677-e9d7c8975f57", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1", Pod:"calico-apiserver-5b854d5954-dnf7s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1f1c20c6aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.130 [INFO][6316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.130 [INFO][6316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" iface="eth0" netns="" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.130 [INFO][6316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.130 [INFO][6316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.162 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.163 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.163 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.175 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.175 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" HandleID="k8s-pod-network.0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.178 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.182909 containerd[1720]: 2025-09-13 00:13:57.180 [INFO][6316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466" Sep 13 00:13:57.182909 containerd[1720]: time="2025-09-13T00:13:57.181908563Z" level=info msg="TearDown network for sandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" successfully" Sep 13 00:13:57.207197 containerd[1720]: time="2025-09-13T00:13:57.207144685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:57.207815 containerd[1720]: time="2025-09-13T00:13:57.207449190Z" level=info msg="RemovePodSandbox \"0690a874ac2091b22b4c41f11fbed162e4f2731393f3d3fb9c9c16d2d19f9466\" returns successfully" Sep 13 00:13:57.208358 containerd[1720]: time="2025-09-13T00:13:57.208299304Z" level=info msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" Sep 13 00:13:57.326992 systemd[1]: run-containerd-runc-k8s.io-0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6-runc.ju61bX.mount: Deactivated successfully. Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.286 [WARNING][6337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf31218c-978b-4717-b2c0-67e187ff41da", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95", Pod:"coredns-7c65d6cfc9-n4vhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fb3315507c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.287 [INFO][6337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.287 [INFO][6337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" iface="eth0" netns="" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.287 [INFO][6337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.288 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.333 [INFO][6359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.333 [INFO][6359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.333 [INFO][6359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.343 [WARNING][6359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.343 [INFO][6359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.345 [INFO][6359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.347850 containerd[1720]: 2025-09-13 00:13:57.346 [INFO][6337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.349081 containerd[1720]: time="2025-09-13T00:13:57.347932339Z" level=info msg="TearDown network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" successfully" Sep 13 00:13:57.349081 containerd[1720]: time="2025-09-13T00:13:57.347976539Z" level=info msg="StopPodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" returns successfully" Sep 13 00:13:57.349714 containerd[1720]: time="2025-09-13T00:13:57.349249961Z" level=info msg="RemovePodSandbox for \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" Sep 13 00:13:57.349714 containerd[1720]: time="2025-09-13T00:13:57.349286461Z" level=info msg="Forcibly stopping sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\"" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.404 [WARNING][6373] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf31218c-978b-4717-b2c0-67e187ff41da", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"79ce3736fc6a2924f6914fca09ac85dd68e0f4929a415c0db0286580c3408c95", Pod:"coredns-7c65d6cfc9-n4vhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fb3315507c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.405 [INFO][6373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.405 [INFO][6373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" iface="eth0" netns="" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.405 [INFO][6373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.405 [INFO][6373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.445 [INFO][6380] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.445 [INFO][6380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.445 [INFO][6380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.454 [WARNING][6380] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.454 [INFO][6380] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" HandleID="k8s-pod-network.df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Workload="ci--4081.3.5--n--78cb87e672-k8s-coredns--7c65d6cfc9--n4vhw-eth0" Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.456 [INFO][6380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.463645 containerd[1720]: 2025-09-13 00:13:57.460 [INFO][6373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94" Sep 13 00:13:57.464501 containerd[1720]: time="2025-09-13T00:13:57.464440686Z" level=info msg="TearDown network for sandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" successfully" Sep 13 00:13:57.478086 containerd[1720]: time="2025-09-13T00:13:57.478039514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:57.478499 containerd[1720]: time="2025-09-13T00:13:57.478397420Z" level=info msg="RemovePodSandbox \"df095fe6cd7d86d80dafff6c2bb37209dab3627cf4f73fc2a659ab6be982cf94\" returns successfully" Sep 13 00:13:57.479261 containerd[1720]: time="2025-09-13T00:13:57.479200433Z" level=info msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" Sep 13 00:13:57.589159 kubelet[3252]: I0913 00:13:57.587676 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:13:57.629040 systemd[1]: run-containerd-runc-k8s.io-1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4-runc.dUo0h2.mount: Deactivated successfully. Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.629 [WARNING][6397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5cc334d-ce94-427b-92b7-c5a6c28738bf", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332", Pod:"calico-apiserver-fc4d7f994-ts7vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali339333db51a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.630 [INFO][6397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.630 [INFO][6397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" iface="eth0" netns="" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.630 [INFO][6397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.630 [INFO][6397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.707 [INFO][6415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.708 [INFO][6415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.709 [INFO][6415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.723 [WARNING][6415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.723 [INFO][6415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.725 [INFO][6415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.735803 containerd[1720]: 2025-09-13 00:13:57.731 [INFO][6397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.735803 containerd[1720]: time="2025-09-13T00:13:57.735734322Z" level=info msg="TearDown network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" successfully" Sep 13 00:13:57.735803 containerd[1720]: time="2025-09-13T00:13:57.735766622Z" level=info msg="StopPodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" returns successfully" Sep 13 00:13:57.738596 containerd[1720]: time="2025-09-13T00:13:57.737880858Z" level=info msg="RemovePodSandbox for \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" Sep 13 00:13:57.738596 containerd[1720]: time="2025-09-13T00:13:57.737929358Z" level=info msg="Forcibly stopping sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\"" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.829 [WARNING][6441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5cc334d-ce94-427b-92b7-c5a6c28738bf", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332", Pod:"calico-apiserver-fc4d7f994-ts7vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali339333db51a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.829 [INFO][6441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.829 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" iface="eth0" netns="" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.829 [INFO][6441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.829 [INFO][6441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.869 [INFO][6450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.870 [INFO][6450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.870 [INFO][6450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.878 [WARNING][6450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.878 [INFO][6450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" HandleID="k8s-pod-network.3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--ts7vf-eth0" Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.880 [INFO][6450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:57.887255 containerd[1720]: 2025-09-13 00:13:57.882 [INFO][6441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993" Sep 13 00:13:57.887255 containerd[1720]: time="2025-09-13T00:13:57.885044118Z" level=info msg="TearDown network for sandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" successfully" Sep 13 00:13:57.902298 containerd[1720]: time="2025-09-13T00:13:57.902249306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:57.902562 containerd[1720]: time="2025-09-13T00:13:57.902537010Z" level=info msg="RemovePodSandbox \"3483ed03b122376b604b7acb84467ce65e4587228992fea7403695e2432c3993\" returns successfully" Sep 13 00:13:57.904757 containerd[1720]: time="2025-09-13T00:13:57.904721147Z" level=info msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.009 [WARNING][6467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec7c8d9-92bc-4074-ba92-6e1593b9ed77", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75", Pod:"calico-apiserver-5b854d5954-qtqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7302a5352fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.010 [INFO][6467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.010 [INFO][6467] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" iface="eth0" netns="" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.010 [INFO][6467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.010 [INFO][6467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.108 [INFO][6475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.109 [INFO][6475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.109 [INFO][6475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.118 [WARNING][6475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.118 [INFO][6475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.119 [INFO][6475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.123879 containerd[1720]: 2025-09-13 00:13:58.122 [INFO][6467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.125352 containerd[1720]: time="2025-09-13T00:13:58.125199633Z" level=info msg="TearDown network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" successfully" Sep 13 00:13:58.125352 containerd[1720]: time="2025-09-13T00:13:58.125236233Z" level=info msg="StopPodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" returns successfully" Sep 13 00:13:58.126813 containerd[1720]: time="2025-09-13T00:13:58.126404153Z" level=info msg="RemovePodSandbox for \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" Sep 13 00:13:58.126813 containerd[1720]: time="2025-09-13T00:13:58.126444154Z" level=info msg="Forcibly stopping sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\"" Sep 13 00:13:58.134400 containerd[1720]: time="2025-09-13T00:13:58.134347586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:58.135835 containerd[1720]: time="2025-09-13T00:13:58.135777910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:13:58.141279 containerd[1720]: time="2025-09-13T00:13:58.140102282Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:58.155923 containerd[1720]: time="2025-09-13T00:13:58.155876346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:58.159456 containerd[1720]: time="2025-09-13T00:13:58.159412705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.884855911s" Sep 13 00:13:58.160538 containerd[1720]: time="2025-09-13T00:13:58.160509123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:13:58.165583 containerd[1720]: time="2025-09-13T00:13:58.165547607Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:13:58.169019 containerd[1720]: time="2025-09-13T00:13:58.168978965Z" level=info msg="CreateContainer within sandbox \"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:13:58.259022 containerd[1720]: time="2025-09-13T00:13:58.258967969Z" level=info msg="CreateContainer within sandbox \"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"26aa3b1b3b4a51ff0419b521a3ef76d33654c023d945edc5edfc836e34113fd7\"" Sep 13 00:13:58.260750 containerd[1720]: time="2025-09-13T00:13:58.260716898Z" level=info msg="StartContainer for \"26aa3b1b3b4a51ff0419b521a3ef76d33654c023d945edc5edfc836e34113fd7\"" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.196 [WARNING][6489] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0", GenerateName:"calico-apiserver-5b854d5954-", Namespace:"calico-apiserver", SelfLink:"", UID:"cec7c8d9-92bc-4074-ba92-6e1593b9ed77", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b854d5954", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75", Pod:"calico-apiserver-5b854d5954-qtqrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7302a5352fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.197 [INFO][6489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.197 [INFO][6489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" iface="eth0" netns="" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.197 [INFO][6489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.197 [INFO][6489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.261 [INFO][6496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.261 [INFO][6496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.262 [INFO][6496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.276 [WARNING][6496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.279 [INFO][6496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" HandleID="k8s-pod-network.d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.284 [INFO][6496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.296447 containerd[1720]: 2025-09-13 00:13:58.289 [INFO][6489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c" Sep 13 00:13:58.297203 containerd[1720]: time="2025-09-13T00:13:58.296680300Z" level=info msg="TearDown network for sandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" successfully" Sep 13 00:13:58.316575 containerd[1720]: time="2025-09-13T00:13:58.316515531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:58.316745 containerd[1720]: time="2025-09-13T00:13:58.316615933Z" level=info msg="RemovePodSandbox \"d921885ad060577fdf3806b8b34b6e2744f4032c43475fda2294c9e4bdf9e48c\" returns successfully" Sep 13 00:13:58.318516 containerd[1720]: time="2025-09-13T00:13:58.318454564Z" level=info msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" Sep 13 00:13:58.336685 systemd[1]: Started cri-containerd-26aa3b1b3b4a51ff0419b521a3ef76d33654c023d945edc5edfc836e34113fd7.scope - libcontainer container 26aa3b1b3b4a51ff0419b521a3ef76d33654c023d945edc5edfc836e34113fd7. 
Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.382 [WARNING][6530] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0", GenerateName:"calico-kube-controllers-6c967b48b-", Namespace:"calico-system", SelfLink:"", UID:"878e2756-bf9e-485f-b59b-c9371e461113", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c967b48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c", Pod:"calico-kube-controllers-6c967b48b-ks4c2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice35e9be906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.383 [INFO][6530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.383 [INFO][6530] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" iface="eth0" netns="" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.383 [INFO][6530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.383 [INFO][6530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.430 [INFO][6539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.431 [INFO][6539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.432 [INFO][6539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
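Editor's note: RemovePodSandbox is handled idempotently throughout this section. Even when the sandbox status can no longer be found ("an error occurred when try to find sandbox: not found"), containerd sends the lifecycle event with a nil podSandboxStatus and still reports "returns successfully", so kubelet's retries converge instead of wedging on already-deleted sandboxes. A sketch of that tolerance, under hypothetical store types rather than containerd's actual CRI implementation:

    package sketch

    import (
        "errors"
        "log"
    )

    var errNotFound = errors.New("not found")

    // Hypothetical store; containerd's CRI plugin differs in detail.
    type sandboxStore interface {
        Stop(id string) error
        Status(id string) (any, error)
        Remove(id string) error
    }

    // forciblyRemove mirrors the tolerance in the records: a sandbox
    // that is already gone counts as success, and a missing status
    // only downgrades the event, it does not fail the removal.
    func forciblyRemove(s sandboxStore, id string) error {
        if err := s.Stop(id); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        if _, err := s.Status(id); errors.Is(err, errNotFound) {
            // "Sending the event with nil podSandboxStatus."
            log.Printf("warning: sandbox %q not found, sending event with nil podSandboxStatus", id)
        }
        if err := s.Remove(id); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil // "RemovePodSandbox ... returns successfully"
    }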
Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.439 [WARNING][6539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.439 [INFO][6539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.441 [INFO][6539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.449345 containerd[1720]: 2025-09-13 00:13:58.447 [INFO][6530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.453121 containerd[1720]: time="2025-09-13T00:13:58.451840493Z" level=info msg="TearDown network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" successfully" Sep 13 00:13:58.453121 containerd[1720]: time="2025-09-13T00:13:58.451884594Z" level=info msg="StopPodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" returns successfully" Sep 13 00:13:58.453121 containerd[1720]: time="2025-09-13T00:13:58.452441904Z" level=info msg="RemovePodSandbox for \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" Sep 13 00:13:58.453121 containerd[1720]: time="2025-09-13T00:13:58.452494404Z" level=info msg="Forcibly stopping sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\"" Sep 13 00:13:58.523075 containerd[1720]: time="2025-09-13T00:13:58.523028784Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:58.526247 containerd[1720]: time="2025-09-13T00:13:58.526191236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:13:58.537575 containerd[1720]: time="2025-09-13T00:13:58.537516126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 371.921318ms" Sep 13 00:13:58.537823 containerd[1720]: time="2025-09-13T00:13:58.537795830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:13:58.540837 containerd[1720]: time="2025-09-13T00:13:58.540680479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:13:58.541848 containerd[1720]: time="2025-09-13T00:13:58.541818998Z" level=info msg="CreateContainer within sandbox \"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 
00:13:58.507 [WARNING][6559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0", GenerateName:"calico-kube-controllers-6c967b48b-", Namespace:"calico-system", SelfLink:"", UID:"878e2756-bf9e-485f-b59b-c9371e461113", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c967b48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"c5109cafe39d7b68eb656c940593274f5aa87cbda9d5414ea629f1786969c99c", Pod:"calico-kube-controllers-6c967b48b-ks4c2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calice35e9be906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.507 [INFO][6559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.507 [INFO][6559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" iface="eth0" netns="" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.507 [INFO][6559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.507 [INFO][6559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.571 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.571 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.571 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.584 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.584 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" HandleID="k8s-pod-network.983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--kube--controllers--6c967b48b--ks4c2-eth0" Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.587 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.607259 containerd[1720]: 2025-09-13 00:13:58.589 [INFO][6559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4" Sep 13 00:13:58.611656 containerd[1720]: time="2025-09-13T00:13:58.610016638Z" level=info msg="TearDown network for sandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" successfully" Sep 13 00:13:58.632615 containerd[1720]: time="2025-09-13T00:13:58.632567715Z" level=info msg="CreateContainer within sandbox \"36b2c6f3f6ecd4850ce50257c83438bc6b831a83f107ae59d1fc38517e190332\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc\"" Sep 13 00:13:58.634009 containerd[1720]: time="2025-09-13T00:13:58.633972538Z" level=info msg="StartContainer for \"26aa3b1b3b4a51ff0419b521a3ef76d33654c023d945edc5edfc836e34113fd7\" returns successfully" Sep 13 00:13:58.634363 containerd[1720]: time="2025-09-13T00:13:58.634336744Z" level=info msg="StartContainer for \"019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc\"" Sep 13 00:13:58.639928 containerd[1720]: time="2025-09-13T00:13:58.639880137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:58.640043 containerd[1720]: time="2025-09-13T00:13:58.639959438Z" level=info msg="RemovePodSandbox \"983c76f45bbc896b83f01e0281c2d922aa76e808ff0ca9e3f5642e4bdc06dfa4\" returns successfully" Sep 13 00:13:58.641642 containerd[1720]: time="2025-09-13T00:13:58.641601366Z" level=info msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" Sep 13 00:13:58.656546 kubelet[3252]: I0913 00:13:58.656166 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:13:58.731078 systemd[1]: run-containerd-runc-k8s.io-019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc-runc.CK0PNV.mount: Deactivated successfully. Sep 13 00:13:58.740643 systemd[1]: Started cri-containerd-019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc.scope - libcontainer container 019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc. Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.731 [WARNING][6600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3ad624e-065e-4802-ad9c-31cc5acb09b3", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e", Pod:"goldmane-7988f88666-8dh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b4196a06d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.731 [INFO][6600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.732 [INFO][6600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" iface="eth0" netns="" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.732 [INFO][6600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.732 [INFO][6600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.779 [INFO][6615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.780 [INFO][6615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.780 [INFO][6615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.786 [WARNING][6615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.786 [INFO][6615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.790 [INFO][6615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.795946 containerd[1720]: 2025-09-13 00:13:58.793 [INFO][6600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.798986 containerd[1720]: time="2025-09-13T00:13:58.796187650Z" level=info msg="TearDown network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" successfully" Sep 13 00:13:58.798986 containerd[1720]: time="2025-09-13T00:13:58.796222851Z" level=info msg="StopPodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" returns successfully" Sep 13 00:13:58.799903 containerd[1720]: time="2025-09-13T00:13:58.799867112Z" level=info msg="RemovePodSandbox for \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" Sep 13 00:13:58.800010 containerd[1720]: time="2025-09-13T00:13:58.799911712Z" level=info msg="Forcibly stopping sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\"" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.869 [WARNING][6636] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"e3ad624e-065e-4802-ad9c-31cc5acb09b3", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 13, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"49601273988714f3bd467edf7d6eb8c0e9cabdf59c4e9314570d9363db3c5f3e", Pod:"goldmane-7988f88666-8dh9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b4196a06d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.870 [INFO][6636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.870 [INFO][6636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" iface="eth0" netns="" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.870 [INFO][6636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.870 [INFO][6636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.920 [INFO][6643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.920 [INFO][6643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.920 [INFO][6643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.935 [WARNING][6643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.935 [INFO][6643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" HandleID="k8s-pod-network.d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Workload="ci--4081.3.5--n--78cb87e672-k8s-goldmane--7988f88666--8dh9r-eth0" Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.938 [INFO][6643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:13:58.941313 containerd[1720]: 2025-09-13 00:13:58.939 [INFO][6636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53" Sep 13 00:13:58.942027 containerd[1720]: time="2025-09-13T00:13:58.941368777Z" level=info msg="TearDown network for sandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" successfully" Sep 13 00:13:58.953699 containerd[1720]: time="2025-09-13T00:13:58.953646883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:13:58.953873 containerd[1720]: time="2025-09-13T00:13:58.953778585Z" level=info msg="RemovePodSandbox \"d433e5a142c7d2508a459cc9ade819280776410edea0057876ff9a872b652e53\" returns successfully" Sep 13 00:13:59.011477 containerd[1720]: time="2025-09-13T00:13:59.010360731Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:13:59.015564 containerd[1720]: time="2025-09-13T00:13:59.015505217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:13:59.018006 containerd[1720]: time="2025-09-13T00:13:59.017962258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 477.243079ms" Sep 13 00:13:59.018166 containerd[1720]: time="2025-09-13T00:13:59.018149461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:13:59.019784 containerd[1720]: time="2025-09-13T00:13:59.019760888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:13:59.021048 containerd[1720]: time="2025-09-13T00:13:59.020930007Z" level=info msg="CreateContainer within sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:13:59.076488 containerd[1720]: time="2025-09-13T00:13:59.074418902Z" level=info msg="CreateContainer within sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\"" Sep 13 00:13:59.076488 containerd[1720]: time="2025-09-13T00:13:59.074553404Z" level=info msg="StartContainer for \"019bebc86e582122f24193ce4d2da4ef2b1b7069a4a5bc3371050eff4dad9bdc\" returns successfully" Sep 13 00:13:59.081743 containerd[1720]: time="2025-09-13T00:13:59.078924177Z" level=info msg="StartContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\"" Sep 13 00:13:59.127683 systemd[1]: Started cri-containerd-ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367.scope - libcontainer container ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367. Sep 13 00:13:59.328800 containerd[1720]: time="2025-09-13T00:13:59.328727053Z" level=info msg="StartContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" returns successfully" Sep 13 00:13:59.719591 kubelet[3252]: I0913 00:13:59.718025 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b854d5954-qtqrp" podStartSLOduration=34.260544396 podStartE2EDuration="48.718003261s" podCreationTimestamp="2025-09-13 00:13:11 +0000 UTC" firstStartedPulling="2025-09-13 00:13:41.814890892 +0000 UTC m=+46.716922310" lastFinishedPulling="2025-09-13 00:13:56.272349757 +0000 UTC m=+61.174381175" observedRunningTime="2025-09-13 00:13:56.617867033 +0000 UTC m=+61.519898551" watchObservedRunningTime="2025-09-13 00:13:59.718003261 +0000 UTC m=+64.620034679" Sep 13 00:13:59.766511 kubelet[3252]: I0913 00:13:59.765582 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b854d5954-dnf7s" podStartSLOduration=35.670724839 podStartE2EDuration="48.765556956s" podCreationTimestamp="2025-09-13 00:13:11 +0000 UTC" firstStartedPulling="2025-09-13 00:13:45.924168258 +0000 UTC m=+50.826199676" lastFinishedPulling="2025-09-13 00:13:59.019000275 +0000 UTC m=+63.921031793" observedRunningTime="2025-09-13 00:13:59.720367101 +0000 UTC m=+64.622398619" watchObservedRunningTime="2025-09-13 00:13:59.765556956 +0000 UTC m=+64.667588374" Sep 13 00:14:00.678497 kubelet[3252]: I0913 00:14:00.677585 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:14:00.678497 kubelet[3252]: I0913 00:14:00.678293 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:14:00.726787 containerd[1720]: time="2025-09-13T00:14:00.726743425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:00.739662 containerd[1720]: time="2025-09-13T00:14:00.739575039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:14:00.755640 containerd[1720]: time="2025-09-13T00:14:00.755543306Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:00.764495 containerd[1720]: time="2025-09-13T00:14:00.764171951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:14:00.765431 containerd[1720]: time="2025-09-13T00:14:00.765392571Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.745499081s" Sep 13 00:14:00.765603 containerd[1720]: time="2025-09-13T00:14:00.765582474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:14:00.769261 containerd[1720]: time="2025-09-13T00:14:00.769073133Z" level=info msg="CreateContainer within sandbox \"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:14:00.811206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606858024.mount: Deactivated successfully. Sep 13 00:14:00.813229 containerd[1720]: time="2025-09-13T00:14:00.812779163Z" level=info msg="CreateContainer within sandbox \"15af99249e2c79550097b9b68f60af7269bde69a81b3e0c45d33bced1f6a3436\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4017060c27a1a1d847fb149ae0e1a2f1bd3650111f84db3ec4bddceeebfbda80\"" Sep 13 00:14:00.814557 containerd[1720]: time="2025-09-13T00:14:00.814298089Z" level=info msg="StartContainer for \"4017060c27a1a1d847fb149ae0e1a2f1bd3650111f84db3ec4bddceeebfbda80\"" Sep 13 00:14:00.889658 systemd[1]: Started cri-containerd-4017060c27a1a1d847fb149ae0e1a2f1bd3650111f84db3ec4bddceeebfbda80.scope - libcontainer container 4017060c27a1a1d847fb149ae0e1a2f1bd3650111f84db3ec4bddceeebfbda80. 
Sep 13 00:14:01.048939 containerd[1720]: time="2025-09-13T00:14:01.048890511Z" level=info msg="StartContainer for \"4017060c27a1a1d847fb149ae0e1a2f1bd3650111f84db3ec4bddceeebfbda80\" returns successfully" Sep 13 00:14:01.639906 kubelet[3252]: I0913 00:14:01.639869 3252 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:14:01.640458 kubelet[3252]: I0913 00:14:01.639928 3252 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:14:01.684493 kubelet[3252]: I0913 00:14:01.683977 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:14:01.701915 kubelet[3252]: I0913 00:14:01.701839 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc4d7f994-ts7vf" podStartSLOduration=35.224197741 podStartE2EDuration="48.701818426s" podCreationTimestamp="2025-09-13 00:13:13 +0000 UTC" firstStartedPulling="2025-09-13 00:13:45.062100978 +0000 UTC m=+49.964132396" lastFinishedPulling="2025-09-13 00:13:58.539721663 +0000 UTC m=+63.441753081" observedRunningTime="2025-09-13 00:13:59.767712192 +0000 UTC m=+64.669743610" watchObservedRunningTime="2025-09-13 00:14:01.701818426 +0000 UTC m=+66.603849844" Sep 13 00:14:01.702159 kubelet[3252]: I0913 00:14:01.702109 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l9bxj" podStartSLOduration=28.684601005 podStartE2EDuration="46.702094431s" podCreationTimestamp="2025-09-13 00:13:15 +0000 UTC" firstStartedPulling="2025-09-13 00:13:42.749541073 +0000 UTC m=+47.651572491" lastFinishedPulling="2025-09-13 00:14:00.767034499 +0000 UTC m=+65.669065917" observedRunningTime="2025-09-13 00:14:01.700796209 +0000 UTC m=+66.602827727" watchObservedRunningTime="2025-09-13 00:14:01.702094431 +0000 UTC m=+66.604125849" Sep 13 00:14:04.861316 kubelet[3252]: I0913 00:14:04.861270 3252 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:14:04.862817 containerd[1720]: time="2025-09-13T00:14:04.862763872Z" level=info msg="StopContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" with timeout 30 (s)" Sep 13 00:14:04.864659 containerd[1720]: time="2025-09-13T00:14:04.863753989Z" level=info msg="Stop container \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" with signal terminated" Sep 13 00:14:04.923843 systemd[1]: cri-containerd-ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367.scope: Deactivated successfully. Sep 13 00:14:04.924106 systemd[1]: cri-containerd-ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367.scope: Consumed 1.181s CPU time. Sep 13 00:14:04.957251 systemd[1]: Created slice kubepods-besteffort-podcb8727d8_2180_43ec_9eaa_57f16c314cd8.slice - libcontainer container kubepods-besteffort-podcb8727d8_2180_43ec_9eaa_57f16c314cd8.slice. Sep 13 00:14:04.999097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367-rootfs.mount: Deactivated successfully. 
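The csi_plugin.go lines above are the kubelet side of CSI driver registration: it discovers a registration socket, validates the advertised versions (here 1.0.0), and registers csi.tigera.io. That socket is served by a small sidecar (node-driver-registrar) implementing the kubelet plugin-registration gRPC API. A sketch of the registrar side, assuming the standard k8s.io/kubelet pluginregistration/v1 package; the registration socket path is an assumption, while the driver name and endpoint come from the log:

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers the kubelet's plugin-registration calls for csi.tigera.io.
type registrar struct{}

func (registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // matches "versions: 1.0.0" above
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !s.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", s.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// The kubelet watches this directory and dials any socket that appears
	// (socket name is an assumption).
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	log.Fatal(srv.Serve(lis))
}
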
Sep 13 00:14:05.086796 kubelet[3252]: I0913 00:14:05.086742 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hc6b\" (UniqueName: \"kubernetes.io/projected/cb8727d8-2180-43ec-9eaa-57f16c314cd8-kube-api-access-7hc6b\") pod \"calico-apiserver-fc4d7f994-rjr7x\" (UID: \"cb8727d8-2180-43ec-9eaa-57f16c314cd8\") " pod="calico-apiserver/calico-apiserver-fc4d7f994-rjr7x" Sep 13 00:14:05.087039 kubelet[3252]: I0913 00:14:05.086821 3252 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb8727d8-2180-43ec-9eaa-57f16c314cd8-calico-apiserver-certs\") pod \"calico-apiserver-fc4d7f994-rjr7x\" (UID: \"cb8727d8-2180-43ec-9eaa-57f16c314cd8\") " pod="calico-apiserver/calico-apiserver-fc4d7f994-rjr7x" Sep 13 00:14:05.267505 containerd[1720]: time="2025-09-13T00:14:05.267352833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-rjr7x,Uid:cb8727d8-2180-43ec-9eaa-57f16c314cd8,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:14:07.417042 containerd[1720]: time="2025-09-13T00:14:07.416813686Z" level=info msg="shim disconnected" id=ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367 namespace=k8s.io Sep 13 00:14:07.417042 containerd[1720]: time="2025-09-13T00:14:07.416896288Z" level=warning msg="cleaning up after shim disconnected" id=ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367 namespace=k8s.io Sep 13 00:14:07.417042 containerd[1720]: time="2025-09-13T00:14:07.416910388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:14:07.460723 containerd[1720]: time="2025-09-13T00:14:07.459834416Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:14:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:14:07.471779 containerd[1720]: time="2025-09-13T00:14:07.471730817Z" level=info msg="StopContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" returns successfully" Sep 13 00:14:07.473339 containerd[1720]: time="2025-09-13T00:14:07.473177142Z" level=info msg="StopPodSandbox for \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\"" Sep 13 00:14:07.473339 containerd[1720]: time="2025-09-13T00:14:07.473232143Z" level=info msg="Container to stop \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:14:07.479781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1-shm.mount: Deactivated successfully. Sep 13 00:14:07.505171 systemd[1]: cri-containerd-d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1.scope: Deactivated successfully. Sep 13 00:14:07.559068 containerd[1720]: time="2025-09-13T00:14:07.557788377Z" level=info msg="shim disconnected" id=d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1 namespace=k8s.io Sep 13 00:14:07.558732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1-rootfs.mount: Deactivated successfully. 
Sep 13 00:14:07.560858 containerd[1720]: time="2025-09-13T00:14:07.559839112Z" level=warning msg="cleaning up after shim disconnected" id=d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1 namespace=k8s.io Sep 13 00:14:07.563349 containerd[1720]: time="2025-09-13T00:14:07.563224469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:14:07.683648 systemd-networkd[1578]: califd7974acd3e: Link UP Sep 13 00:14:07.685504 systemd-networkd[1578]: califd7974acd3e: Gained carrier Sep 13 00:14:07.723649 kubelet[3252]: I0913 00:14:07.723610 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.538 [INFO][6840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0 calico-apiserver-fc4d7f994- calico-apiserver cb8727d8-2180-43ec-9eaa-57f16c314cd8 1140 0 2025-09-13 00:14:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc4d7f994 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-78cb87e672 calico-apiserver-fc4d7f994-rjr7x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd7974acd3e [] [] }} ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.538 [INFO][6840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.599 [INFO][6872] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" HandleID="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.599 [INFO][6872] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" HandleID="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354f80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-78cb87e672", "pod":"calico-apiserver-fc4d7f994-rjr7x", "timestamp":"2025-09-13 00:14:07.599383582 +0000 UTC"}, Hostname:"ci-4081.3.5-n-78cb87e672", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.599 [INFO][6872] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.600 [INFO][6872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.600 [INFO][6872] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-78cb87e672' Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.611 [INFO][6872] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.617 [INFO][6872] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.623 [INFO][6872] ipam/ipam.go 511: Trying affinity for 192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.626 [INFO][6872] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.630 [INFO][6872] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.630 [INFO][6872] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.635 [INFO][6872] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98 Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.646 [INFO][6872] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.669 [INFO][6872] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.202/26] block=192.168.107.192/26 handle="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.669 [INFO][6872] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.202/26] handle="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" host="ci-4081.3.5-n-78cb87e672" Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.669 [INFO][6872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:14:07.729677 containerd[1720]: 2025-09-13 00:14:07.669 [INFO][6872] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.202/26] IPv6=[] ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" HandleID="k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.673 [INFO][6840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb8727d8-2180-43ec-9eaa-57f16c314cd8", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"", Pod:"calico-apiserver-fc4d7f994-rjr7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd7974acd3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.674 [INFO][6840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.202/32] ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.674 [INFO][6840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd7974acd3e ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.684 [INFO][6840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.692 [INFO][6840] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0", GenerateName:"calico-apiserver-fc4d7f994-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb8727d8-2180-43ec-9eaa-57f16c314cd8", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 14, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc4d7f994", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-78cb87e672", ContainerID:"a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98", Pod:"calico-apiserver-fc4d7f994-rjr7x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd7974acd3e", MAC:"16:fc:47:5d:4f:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:14:07.732722 containerd[1720]: 2025-09-13 00:14:07.723 [INFO][6840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98" Namespace="calico-apiserver" Pod="calico-apiserver-fc4d7f994-rjr7x" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--fc4d7f994--rjr7x-eth0" Sep 13 00:14:07.780046 systemd-networkd[1578]: calib1f1c20c6aa: Link DOWN Sep 13 00:14:07.780292 systemd-networkd[1578]: calib1f1c20c6aa: Lost carrier Sep 13 00:14:07.805517 containerd[1720]: time="2025-09-13T00:14:07.804415659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:14:07.805696 containerd[1720]: time="2025-09-13T00:14:07.805566279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:14:07.806552 containerd[1720]: time="2025-09-13T00:14:07.806456394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:07.806773 containerd[1720]: time="2025-09-13T00:14:07.806706098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:14:07.849666 systemd[1]: Started cri-containerd-a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98.scope - libcontainer container a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98. 
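The ipam/ipam.go trace above is Calico's address assignment in full: acquire the host-wide IPAM lock, confirm this host's affinity to block 192.168.107.192/26, claim 192.168.107.202/26 under the sandbox's handle, release the lock, then plumb the veth and write the endpoint (now with its MAC) back to the datastore. A sketch of the same assignment request through libcalico-go; client construction is simplified to NewFromEnv, fields such as IntendedUse are omitted, and AutoAssign's exact signature varies across Calico releases (recent ones return *ipam.IPAMAssignments values whose IPs field holds the claimed CIDRs):

package main

import (
	"context"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Client built from the environment; the CNI plugin builds its client
	// from the CNI network config instead (simplification).
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors the AutoAssignArgs dumped in the log: one IPv4 address,
	// keyed by the k8s-pod-network.<sandboxID> handle, for this host.
	handle := "k8s-pod-network.a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1,
		HandleID: &handle,
		Hostname: "ci-4081.3.5-n-78cb87e672",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"pod":       "calico-apiserver-fc4d7f994-rjr7x",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Under the existing block affinity this yields an address from
	// 192.168.107.192/26, e.g. 192.168.107.202/26 as logged.
	log.Printf("assigned: %v", v4.IPs)
}
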
Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.774 [INFO][6901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.774 [INFO][6901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" iface="eth0" netns="/var/run/netns/cni-eaa039ce-2127-e35b-3833-04a849c1a0c0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.776 [INFO][6901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" iface="eth0" netns="/var/run/netns/cni-eaa039ce-2127-e35b-3833-04a849c1a0c0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.786 [INFO][6901] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" after=10.850785ms iface="eth0" netns="/var/run/netns/cni-eaa039ce-2127-e35b-3833-04a849c1a0c0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.786 [INFO][6901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.786 [INFO][6901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.857 [INFO][6931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.857 [INFO][6931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.857 [INFO][6931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.933 [INFO][6931] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.933 [INFO][6931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.944 [INFO][6931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:14:07.949762 containerd[1720]: 2025-09-13 00:14:07.946 [INFO][6901] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:07.951886 containerd[1720]: time="2025-09-13T00:14:07.949988828Z" level=info msg="TearDown network for sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" successfully" Sep 13 00:14:07.951886 containerd[1720]: time="2025-09-13T00:14:07.950037129Z" level=info msg="StopPodSandbox for \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" returns successfully" Sep 13 00:14:08.110991 kubelet[3252]: I0913 00:14:08.110505 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5dc69410-9fe3-44c7-9677-e9d7c8975f57-calico-apiserver-certs\") pod \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\" (UID: \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\") " Sep 13 00:14:08.110991 kubelet[3252]: I0913 00:14:08.110606 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9bm7\" (UniqueName: \"kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7\") pod \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\" (UID: \"5dc69410-9fe3-44c7-9677-e9d7c8975f57\") " Sep 13 00:14:08.113797 kubelet[3252]: I0913 00:14:08.113291 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc69410-9fe3-44c7-9677-e9d7c8975f57-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5dc69410-9fe3-44c7-9677-e9d7c8975f57" (UID: "5dc69410-9fe3-44c7-9677-e9d7c8975f57"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:14:08.116232 kubelet[3252]: I0913 00:14:08.116115 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7" (OuterVolumeSpecName: "kube-api-access-s9bm7") pod "5dc69410-9fe3-44c7-9677-e9d7c8975f57" (UID: "5dc69410-9fe3-44c7-9677-e9d7c8975f57"). InnerVolumeSpecName "kube-api-access-s9bm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:14:08.135244 containerd[1720]: time="2025-09-13T00:14:08.135200669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc4d7f994-rjr7x,Uid:cb8727d8-2180-43ec-9eaa-57f16c314cd8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98\"" Sep 13 00:14:08.140246 containerd[1720]: time="2025-09-13T00:14:08.140194454Z" level=info msg="CreateContainer within sandbox \"a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:14:08.206950 containerd[1720]: time="2025-09-13T00:14:08.206808484Z" level=info msg="CreateContainer within sandbox \"a90ccba35253db8e99a39a0ec7d22ec713736800a48321175e6f3d6006bf0f98\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a8b38370c0c7f5a3764ffb7b35d39d0aaed79e53000c946e6439f6e753c74417\"" Sep 13 00:14:08.207775 containerd[1720]: time="2025-09-13T00:14:08.207705899Z" level=info msg="StartContainer for \"a8b38370c0c7f5a3764ffb7b35d39d0aaed79e53000c946e6439f6e753c74417\"" Sep 13 00:14:08.210960 kubelet[3252]: I0913 00:14:08.210922 3252 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9bm7\" (UniqueName: \"kubernetes.io/projected/5dc69410-9fe3-44c7-9677-e9d7c8975f57-kube-api-access-s9bm7\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:14:08.210960 kubelet[3252]: I0913 00:14:08.210962 3252 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5dc69410-9fe3-44c7-9677-e9d7c8975f57-calico-apiserver-certs\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:14:08.250728 systemd[1]: Started cri-containerd-a8b38370c0c7f5a3764ffb7b35d39d0aaed79e53000c946e6439f6e753c74417.scope - libcontainer container a8b38370c0c7f5a3764ffb7b35d39d0aaed79e53000c946e6439f6e753c74417. Sep 13 00:14:08.324167 containerd[1720]: time="2025-09-13T00:14:08.323928970Z" level=info msg="StartContainer for \"a8b38370c0c7f5a3764ffb7b35d39d0aaed79e53000c946e6439f6e753c74417\" returns successfully" Sep 13 00:14:08.438276 systemd[1]: run-netns-cni\x2deaa039ce\x2d2127\x2de35b\x2d3833\x2d04a849c1a0c0.mount: Deactivated successfully. Sep 13 00:14:08.438415 systemd[1]: var-lib-kubelet-pods-5dc69410\x2d9fe3\x2d44c7\x2d9677\x2de9d7c8975f57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds9bm7.mount: Deactivated successfully. Sep 13 00:14:08.438530 systemd[1]: var-lib-kubelet-pods-5dc69410\x2d9fe3\x2d44c7\x2d9677\x2de9d7c8975f57-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:14:08.743593 systemd[1]: Removed slice kubepods-besteffort-pod5dc69410_9fe3_44c7_9677_e9d7c8975f57.slice - libcontainer container kubepods-besteffort-pod5dc69410_9fe3_44c7_9677_e9d7c8975f57.slice. Sep 13 00:14:08.743751 systemd[1]: kubepods-besteffort-pod5dc69410_9fe3_44c7_9677_e9d7c8975f57.slice: Consumed 1.216s CPU time. 
Sep 13 00:14:08.777499 kubelet[3252]: I0913 00:14:08.776366 3252 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc4d7f994-rjr7x" podStartSLOduration=4.776340742 podStartE2EDuration="4.776340742s" podCreationTimestamp="2025-09-13 00:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:14:08.761818096 +0000 UTC m=+73.663849514" watchObservedRunningTime="2025-09-13 00:14:08.776340742 +0000 UTC m=+73.678372160" Sep 13 00:14:09.070716 systemd-networkd[1578]: califd7974acd3e: Gained IPv6LL Sep 13 00:14:09.197438 kubelet[3252]: I0913 00:14:09.196972 3252 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc69410-9fe3-44c7-9677-e9d7c8975f57" path="/var/lib/kubelet/pods/5dc69410-9fe3-44c7-9677-e9d7c8975f57/volumes" Sep 13 00:14:10.380147 containerd[1720]: time="2025-09-13T00:14:10.379457859Z" level=info msg="StopContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" with timeout 30 (s)" Sep 13 00:14:10.380147 containerd[1720]: time="2025-09-13T00:14:10.380002168Z" level=info msg="Stop container \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" with signal terminated" Sep 13 00:14:10.431900 systemd[1]: cri-containerd-59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665.scope: Deactivated successfully. Sep 13 00:14:10.432219 systemd[1]: cri-containerd-59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665.scope: Consumed 1.851s CPU time. Sep 13 00:14:10.478414 containerd[1720]: time="2025-09-13T00:14:10.475970679Z" level=info msg="shim disconnected" id=59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665 namespace=k8s.io Sep 13 00:14:10.478414 containerd[1720]: time="2025-09-13T00:14:10.477410204Z" level=warning msg="cleaning up after shim disconnected" id=59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665 namespace=k8s.io Sep 13 00:14:10.478414 containerd[1720]: time="2025-09-13T00:14:10.477586007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:14:10.477356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665-rootfs.mount: Deactivated successfully. Sep 13 00:14:10.584244 containerd[1720]: time="2025-09-13T00:14:10.584189496Z" level=info msg="StopContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" returns successfully" Sep 13 00:14:10.587114 containerd[1720]: time="2025-09-13T00:14:10.586920342Z" level=info msg="StopPodSandbox for \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\"" Sep 13 00:14:10.587114 containerd[1720]: time="2025-09-13T00:14:10.586978043Z" level=info msg="Container to stop \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:14:10.596812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75-shm.mount: Deactivated successfully. Sep 13 00:14:10.614608 systemd[1]: cri-containerd-624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75.scope: Deactivated successfully. 
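The pod_startup_latency_tracker entries are internally consistent: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That relation is inferred from the logged numbers, not quoted from kubelet source; checking the arithmetic against the csi-node-driver-l9bxj entry earlier in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the csi-node-driver-l9bxj entry.
	created := parse("2025-09-13 00:13:15 +0000 UTC")
	watchObserved := parse("2025-09-13 00:14:01.702094431 +0000 UTC")
	firstPull := parse("2025-09-13 00:13:42.749541073 +0000 UTC")
	lastPull := parse("2025-09-13 00:14:00.767034499 +0000 UTC")

	e2e := watchObserved.Sub(created)    // 46.702094431s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 28.684601005s = podStartSLOduration
	fmt.Println(e2e, slo)
}

The fc4d7f994-rjr7x entry above is the degenerate case: both pull timestamps are the zero time (no pull happened), so SLO and E2E durations coincide at 4.776s.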
Sep 13 00:14:10.652052 containerd[1720]: time="2025-09-13T00:14:10.651595828Z" level=info msg="shim disconnected" id=624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75 namespace=k8s.io Sep 13 00:14:10.652052 containerd[1720]: time="2025-09-13T00:14:10.651670429Z" level=warning msg="cleaning up after shim disconnected" id=624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75 namespace=k8s.io Sep 13 00:14:10.652052 containerd[1720]: time="2025-09-13T00:14:10.651681729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:14:10.654066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75-rootfs.mount: Deactivated successfully. Sep 13 00:14:10.738575 kubelet[3252]: I0913 00:14:10.738050 3252 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Sep 13 00:14:10.820174 systemd-networkd[1578]: cali7302a5352fd: Link DOWN Sep 13 00:14:10.820658 systemd-networkd[1578]: cali7302a5352fd: Lost carrier Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.814 [INFO][7092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.814 [INFO][7092] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" iface="eth0" netns="/var/run/netns/cni-b8945e83-f26b-5acb-5bba-6b7415ecd6d2" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.816 [INFO][7092] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" iface="eth0" netns="/var/run/netns/cni-b8945e83-f26b-5acb-5bba-6b7415ecd6d2" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.828 [INFO][7092] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" after=13.68833ms iface="eth0" netns="/var/run/netns/cni-b8945e83-f26b-5acb-5bba-6b7415ecd6d2" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.828 [INFO][7092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.828 [INFO][7092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.886 [INFO][7100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.886 [INFO][7100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.887 [INFO][7100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.969 [INFO][7100] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.969 [INFO][7100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0" Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.971 [INFO][7100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:14:10.976812 containerd[1720]: 2025-09-13 00:14:10.973 [INFO][7092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Sep 13 00:14:10.978953 containerd[1720]: time="2025-09-13T00:14:10.978628218Z" level=info msg="TearDown network for sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" successfully" Sep 13 00:14:10.978953 containerd[1720]: time="2025-09-13T00:14:10.978665419Z" level=info msg="StopPodSandbox for \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" returns successfully" Sep 13 00:14:10.985285 systemd[1]: run-netns-cni\x2db8945e83\x2df26b\x2d5acb\x2d5bba\x2d6b7415ecd6d2.mount: Deactivated successfully. Sep 13 00:14:11.131895 kubelet[3252]: I0913 00:14:11.131852 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-calico-apiserver-certs\") pod \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\" (UID: \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\") " Sep 13 00:14:11.131895 kubelet[3252]: I0913 00:14:11.131905 3252 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k6b8\" (UniqueName: \"kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8\") pod \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\" (UID: \"cec7c8d9-92bc-4074-ba92-6e1593b9ed77\") " Sep 13 00:14:11.138177 kubelet[3252]: I0913 00:14:11.138117 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8" (OuterVolumeSpecName: "kube-api-access-5k6b8") pod "cec7c8d9-92bc-4074-ba92-6e1593b9ed77" (UID: "cec7c8d9-92bc-4074-ba92-6e1593b9ed77"). InnerVolumeSpecName "kube-api-access-5k6b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:14:11.138388 kubelet[3252]: I0913 00:14:11.138207 3252 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "cec7c8d9-92bc-4074-ba92-6e1593b9ed77" (UID: "cec7c8d9-92bc-4074-ba92-6e1593b9ed77"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:14:11.142088 systemd[1]: var-lib-kubelet-pods-cec7c8d9\x2d92bc\x2d4074\x2dba92\x2d6e1593b9ed77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5k6b8.mount: Deactivated successfully. 
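Teardown mirrors assignment: under the same host-wide lock, every address recorded against the sandbox's handle is released in one operation, which is also why the later repeat StopPodSandbox passes only hit "Asked to release address but it doesn't exist" and carry on. A sketch assuming libcalico-go's ipam.Interface and its ReleaseByHandle method:

package main

import (
	"context"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
)

func main() {
	c, err := clientv3.NewFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	// Releasing by handle frees all addresses claimed under the sandbox's
	// k8s-pod-network.<sandboxID> handle in one shot; a second release is
	// a no-op from the datastore's point of view.
	handle := "k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
	if err := c.IPAM().ReleaseByHandle(context.Background(), handle); err != nil {
		log.Printf("release failed (handle may already be gone): %v", err)
	}
}
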
Sep 13 00:14:11.203659 systemd[1]: Removed slice kubepods-besteffort-podcec7c8d9_92bc_4074_ba92_6e1593b9ed77.slice - libcontainer container kubepods-besteffort-podcec7c8d9_92bc_4074_ba92_6e1593b9ed77.slice. Sep 13 00:14:11.203813 systemd[1]: kubepods-besteffort-podcec7c8d9_92bc_4074_ba92_6e1593b9ed77.slice: Consumed 1.885s CPU time. Sep 13 00:14:11.232688 kubelet[3252]: I0913 00:14:11.232526 3252 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k6b8\" (UniqueName: \"kubernetes.io/projected/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-kube-api-access-5k6b8\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:14:11.232688 kubelet[3252]: I0913 00:14:11.232564 3252 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cec7c8d9-92bc-4074-ba92-6e1593b9ed77-calico-apiserver-certs\") on node \"ci-4081.3.5-n-78cb87e672\" DevicePath \"\"" Sep 13 00:14:11.476585 systemd[1]: var-lib-kubelet-pods-cec7c8d9\x2d92bc\x2d4074\x2dba92\x2d6e1593b9ed77-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:14:13.197513 kubelet[3252]: I0913 00:14:13.197295 3252 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec7c8d9-92bc-4074-ba92-6e1593b9ed77" path="/var/lib/kubelet/pods/cec7c8d9-92bc-4074-ba92-6e1593b9ed77/volumes" Sep 13 00:14:46.241858 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:56560.service - OpenSSH per-connection server daemon (10.200.16.10:56560). Sep 13 00:14:46.871229 sshd[7197]: Accepted publickey for core from 10.200.16.10 port 56560 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:14:46.872817 sshd[7197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:46.878113 systemd-logind[1699]: New session 10 of user core. Sep 13 00:14:46.881223 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:14:47.381061 sshd[7197]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:47.384013 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:56560.service: Deactivated successfully. Sep 13 00:14:47.386937 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:14:47.389151 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:14:47.390800 systemd-logind[1699]: Removed session 10. Sep 13 00:14:52.500578 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:46112.service - OpenSSH per-connection server daemon (10.200.16.10:46112). Sep 13 00:14:53.124841 sshd[7236]: Accepted publickey for core from 10.200.16.10 port 46112 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:14:53.126772 sshd[7236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:14:53.131573 systemd-logind[1699]: New session 11 of user core. Sep 13 00:14:53.134715 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:14:53.684272 sshd[7236]: pam_unix(sshd:session): session closed for user core Sep 13 00:14:53.688856 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:46112.service: Deactivated successfully. Sep 13 00:14:53.693210 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:14:53.695578 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:14:53.698310 systemd-logind[1699]: Removed session 11. 
Sep 13 00:14:57.238382 systemd[1]: run-containerd-runc-k8s.io-0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6-runc.czoN73.mount: Deactivated successfully. Sep 13 00:14:58.799875 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:46120.service - OpenSSH per-connection server daemon (10.200.16.10:46120). Sep 13 00:14:58.955492 kubelet[3252]: I0913 00:14:58.955430 3252 scope.go:117] "RemoveContainer" containerID="ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367" Sep 13 00:14:58.956938 containerd[1720]: time="2025-09-13T00:14:58.956896069Z" level=info msg="RemoveContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\"" Sep 13 00:14:58.968712 containerd[1720]: time="2025-09-13T00:14:58.968661464Z" level=info msg="RemoveContainer for \"ce81ab14d3f6d69b0ae903fac75da34da5acaad4692ed432232f141b3bc85367\" returns successfully" Sep 13 00:14:58.968991 kubelet[3252]: I0913 00:14:58.968967 3252 scope.go:117] "RemoveContainer" containerID="59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665" Sep 13 00:14:58.970182 containerd[1720]: time="2025-09-13T00:14:58.970152389Z" level=info msg="RemoveContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\"" Sep 13 00:14:58.985295 containerd[1720]: time="2025-09-13T00:14:58.985253739Z" level=info msg="RemoveContainer for \"59d740285c9614060f4508e5335687075d261de07246e12a319205dee7c85665\" returns successfully" Sep 13 00:14:58.986760 containerd[1720]: time="2025-09-13T00:14:58.986720963Z" level=info msg="StopPodSandbox for \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\"" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.020 [WARNING][7301] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.020 [INFO][7301] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.020 [INFO][7301] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" iface="eth0" netns="" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.020 [INFO][7301] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.020 [INFO][7301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.044 [INFO][7308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0" Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.044 [INFO][7308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.044 [INFO][7308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.050 [WARNING][7308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.050 [INFO][7308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.052 [INFO][7308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:14:59.054577 containerd[1720]: 2025-09-13 00:14:59.053 [INFO][7301] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1"
Sep 13 00:14:59.055337 containerd[1720]: time="2025-09-13T00:14:59.055298100Z" level=info msg="TearDown network for sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" successfully"
Sep 13 00:14:59.055337 containerd[1720]: time="2025-09-13T00:14:59.055333201Z" level=info msg="StopPodSandbox for \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" returns successfully"
Sep 13 00:14:59.056065 containerd[1720]: time="2025-09-13T00:14:59.056034513Z" level=info msg="RemovePodSandbox for \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\""
Sep 13 00:14:59.056170 containerd[1720]: time="2025-09-13T00:14:59.056073213Z" level=info msg="Forcibly stopping sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\""
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.090 [WARNING][7322] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.090 [INFO][7322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.090 [INFO][7322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" iface="eth0" netns=""
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.090 [INFO][7322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.090 [INFO][7322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.115 [INFO][7329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.116 [INFO][7329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.116 [INFO][7329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.122 [WARNING][7329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.122 [INFO][7329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" HandleID="k8s-pod-network.d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--dnf7s-eth0"
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.123 [INFO][7329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:14:59.125895 containerd[1720]: 2025-09-13 00:14:59.124 [INFO][7322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1"
Sep 13 00:14:59.125895 containerd[1720]: time="2025-09-13T00:14:59.125685368Z" level=info msg="TearDown network for sandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" successfully"
Sep 13 00:14:59.135556 containerd[1720]: time="2025-09-13T00:14:59.135505531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:14:59.135882 containerd[1720]: time="2025-09-13T00:14:59.135593532Z" level=info msg="RemovePodSandbox \"d49caad4f28409019c68d727bf6a627de091e3e69dca337a8256143ec6d931c1\" returns successfully"
Sep 13 00:14:59.136113 containerd[1720]: time="2025-09-13T00:14:59.136080940Z" level=info msg="StopPodSandbox for \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\""
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.171 [WARNING][7343] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.171 [INFO][7343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.171 [INFO][7343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" iface="eth0" netns=""
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.171 [INFO][7343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.171 [INFO][7343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.197 [INFO][7350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.198 [INFO][7350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.198 [INFO][7350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.204 [WARNING][7350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.204 [INFO][7350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.205 [INFO][7350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:14:59.208030 containerd[1720]: 2025-09-13 00:14:59.206 [INFO][7343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.208636 containerd[1720]: time="2025-09-13T00:14:59.208038933Z" level=info msg="TearDown network for sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" successfully"
Sep 13 00:14:59.208636 containerd[1720]: time="2025-09-13T00:14:59.208064434Z" level=info msg="StopPodSandbox for \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" returns successfully"
Sep 13 00:14:59.208636 containerd[1720]: time="2025-09-13T00:14:59.208565842Z" level=info msg="RemovePodSandbox for \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\""
Sep 13 00:14:59.208636 containerd[1720]: time="2025-09-13T00:14:59.208597643Z" level=info msg="Forcibly stopping sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\""
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.242 [WARNING][7364] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" WorkloadEndpoint="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.242 [INFO][7364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.242 [INFO][7364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" iface="eth0" netns=""
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.242 [INFO][7364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.242 [INFO][7364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.263 [INFO][7371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.263 [INFO][7371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.263 [INFO][7371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.268 [WARNING][7371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.268 [INFO][7371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" HandleID="k8s-pod-network.624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75" Workload="ci--4081.3.5--n--78cb87e672-k8s-calico--apiserver--5b854d5954--qtqrp-eth0"
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.270 [INFO][7371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:14:59.273159 containerd[1720]: 2025-09-13 00:14:59.271 [INFO][7364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75"
Sep 13 00:14:59.273159 containerd[1720]: time="2025-09-13T00:14:59.272481602Z" level=info msg="TearDown network for sandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" successfully"
Sep 13 00:14:59.287286 containerd[1720]: time="2025-09-13T00:14:59.287233447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:14:59.287493 containerd[1720]: time="2025-09-13T00:14:59.287335348Z" level=info msg="RemovePodSandbox \"624def08f513ec907ec9b8c0684ea7832bdf3a6724d24f051f1f793c5f020f75\" returns successfully"
Sep 13 00:14:59.420659 sshd[7291]: Accepted publickey for core from 10.200.16.10 port 46120 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:59.423693 sshd[7291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:59.428389 systemd-logind[1699]: New session 12 of user core.
Sep 13 00:14:59.433662 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:14:59.923139 sshd[7291]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:59.927213 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:14:59.928139 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:46120.service: Deactivated successfully.
Sep 13 00:14:59.930333 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:14:59.931421 systemd-logind[1699]: Removed session 12.
Sep 13 00:15:00.039775 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:47806.service - OpenSSH per-connection server daemon (10.200.16.10:47806).
Sep 13 00:15:00.655821 sshd[7389]: Accepted publickey for core from 10.200.16.10 port 47806 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:00.657392 sshd[7389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:00.662678 systemd-logind[1699]: New session 13 of user core.
Sep 13 00:15:00.667669 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:15:00.726434 systemd[1]: run-containerd-runc-k8s.io-0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6-runc.PAprkf.mount: Deactivated successfully.
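The containerd blocks at 00:14:58-00:14:59 above trace Calico's CNI teardown for two calico-apiserver sandboxes: clean up the netns, then release the IP allocation under a host-wide IPAM lock, first by handle ID and, when no allocation is found there ("Asked to release address but it doesn't exist. Ignoring"), by workload ID. The Go sketch below models only that release order, using hypothetical in-memory types; the real plugin goes through libcalico-go and a backing datastore, not a map.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a stand-in for the Calico IPAM datastore.
type ipamStore struct {
	mu         sync.Mutex // models the "host-wide IPAM lock" in the log
	byHandle   map[string][]string
	byWorkload map[string][]string
}

// release mirrors the order of operations visible above: under the
// host-wide lock, try the handle ID first; if nothing is allocated under
// it, fall back to the workload ID so stale allocations are still
// reclaimed. A miss on both is not an error, teardown still succeeds.
func (s *ipamStore) release(handleID, workloadID string) []string {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ips, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		return ips
	}
	// "Asked to release address but it doesn't exist. Ignoring"
	if ips, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		return ips
	}
	return nil
}

func main() {
	s := &ipamStore{
		byHandle: map[string][]string{},
		byWorkload: map[string][]string{
			// hypothetical example allocation
			"calico-apiserver-5b854d5954-dnf7s": {"192.0.2.10"},
		},
	}
	fmt.Println(s.release("k8s-pod-network.example-handle", "calico-apiserver-5b854d5954-dnf7s"))
}

Treating a missing allocation as a no-op is what makes the teardown idempotent: the log runs the same teardown twice per sandbox (StopPodSandbox, then the "Forcibly stopping sandbox" pass under RemovePodSandbox), and both passes return successfully.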
Sep 13 00:15:01.201786 sshd[7389]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:01.204834 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:47806.service: Deactivated successfully.
Sep 13 00:15:01.207498 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:15:01.209124 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:15:01.210921 systemd-logind[1699]: Removed session 13.
Sep 13 00:15:01.323314 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:47816.service - OpenSSH per-connection server daemon (10.200.16.10:47816).
Sep 13 00:15:01.937869 sshd[7422]: Accepted publickey for core from 10.200.16.10 port 47816 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:01.939539 sshd[7422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:01.944519 systemd-logind[1699]: New session 14 of user core.
Sep 13 00:15:01.950674 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:15:02.441872 sshd[7422]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:02.445887 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:47816.service: Deactivated successfully.
Sep 13 00:15:02.448212 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:15:02.449222 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:15:02.450553 systemd-logind[1699]: Removed session 14.
Sep 13 00:15:07.552697 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:47830.service - OpenSSH per-connection server daemon (10.200.16.10:47830).
Sep 13 00:15:08.060493 update_engine[1702]: I20250913 00:15:08.060422 1702 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:15:08.061220 update_engine[1702]: I20250913 00:15:08.060519 1702 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:15:08.061220 update_engine[1702]: I20250913 00:15:08.060734 1702 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:15:08.061349 update_engine[1702]: I20250913 00:15:08.061314 1702 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:15:08.061614 update_engine[1702]: I20250913 00:15:08.061450 1702 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:15:08.061614 update_engine[1702]: I20250913 00:15:08.061481 1702 update_attempter.cc:643] Scheduling an action processor start.
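The prefs.cc entries above show how update_engine keeps its state: each preference is a separate flat file under /var/lib/update_engine/prefs, and a missing file is logged as "not present" rather than treated as an error. A small Go sketch of that lookup pattern follows; it is illustrative only (the real implementation is C++ in prefs.cc), but the directory path and key names are taken from the log.

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

const prefsDir = "/var/lib/update_engine/prefs"

// getPref reads a single preference file. A missing file is a normal,
// distinguishable outcome, not a failure.
func getPref(key string) (value string, present bool, err error) {
	b, err := os.ReadFile(filepath.Join(prefsDir, key))
	if errors.Is(err, fs.ErrNotExist) {
		return "", false, nil // logged as "<key> not present in <dir>"
	}
	if err != nil {
		return "", false, err
	}
	return string(b), true, nil
}

func main() {
	for _, key := range []string{"certificate-report-to-send-update", "aleph-version"} {
		v, ok, err := getPref(key)
		switch {
		case err != nil:
			fmt.Println(key, "error:", err)
		case !ok:
			fmt.Println(key, "not present in", prefsDir)
		default:
			fmt.Println(key, "=", v)
		}
	}
}

One file per key keeps each write atomic at the filesystem level and makes the state trivially inspectable from a shell, which suits a daemon that must survive reboots mid-update.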
Sep 13 00:15:08.061614 update_engine[1702]: I20250913 00:15:08.061501 1702 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:15:08.061614 update_engine[1702]: I20250913 00:15:08.061539 1702 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:15:08.061803 update_engine[1702]: I20250913 00:15:08.061619 1702 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:15:08.061803 update_engine[1702]: I20250913 00:15:08.061630 1702 omaha_request_action.cc:272] Request:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]:
Sep 13 00:15:08.061803 update_engine[1702]: I20250913 00:15:08.061638 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:15:08.062683 locksmithd[1778]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:15:08.063242 update_engine[1702]: I20250913 00:15:08.063207 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:15:08.063605 update_engine[1702]: I20250913 00:15:08.063571 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:15:08.080783 update_engine[1702]: E20250913 00:15:08.080729 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:15:08.080897 update_engine[1702]: I20250913 00:15:08.080829 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:15:08.173881 sshd[7460]: Accepted publickey for core from 10.200.16.10 port 47830 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:08.175503 sshd[7460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:08.180654 systemd-logind[1699]: New session 15 of user core.
Sep 13 00:15:08.186672 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:15:08.675642 sshd[7460]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:08.678574 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:47830.service: Deactivated successfully.
Sep 13 00:15:08.681376 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:15:08.682850 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:15:08.683920 systemd-logind[1699]: Removed session 15.
Sep 13 00:15:13.786894 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:37842.service - OpenSSH per-connection server daemon (10.200.16.10:37842).
Sep 13 00:15:14.408780 sshd[7476]: Accepted publickey for core from 10.200.16.10 port 37842 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:14.409408 sshd[7476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:14.414205 systemd-logind[1699]: New session 16 of user core.
Sep 13 00:15:14.419620 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:15:14.922005 sshd[7476]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:14.926170 systemd-logind[1699]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:15:14.926640 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:37842.service: Deactivated successfully.
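The update_engine sequence at 00:15:08 above is an update check aimed at an update server whose URL is the literal string "disabled" (a common way of switching off update checks on Flatcar), so every attempt dies at DNS resolution: "Could not resolve host: disabled". The empty update_engine[1702]: lines are where the Omaha request XML was printed; its body did not survive the capture. The fetcher sets a short timeout, counts the DNS failure as "no HTTP response", and retries on a fixed cadence, as the 00:15:08/00:15:18/00:15:28 timestamps show. Below is a Go sketch of that loop; the ten-second spacing and three-retry cap are read off the timestamps here, not taken from the update_engine source.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetries makes one attempt plus up to maxRetries more, sleeping
// a fixed delay between attempts, and reports the last error if all fail.
func fetchWithRetries(url string, maxRetries int, delay time.Duration) error {
	client := &http.Client{Timeout: 1 * time.Second} // "Setting up timeout source: 1 seconds."
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err // e.g. DNS failure: no such host "disabled"
		if attempt < maxRetries {
			fmt.Printf("No HTTP response, retry %d\n", attempt+1)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("transfer resulted in an error: %w", lastErr)
}

func main() {
	// "disabled" is not a resolvable hostname, so every attempt fails
	// at DNS, just as in the log.
	if err := fetchWithRetries("http://disabled/v1/update/", 3, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}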
Sep 13 00:15:14.928931 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:15:14.929931 systemd-logind[1699]: Removed session 16.
Sep 13 00:15:18.056266 update_engine[1702]: I20250913 00:15:18.056190 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:15:18.056853 update_engine[1702]: I20250913 00:15:18.056505 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:15:18.056853 update_engine[1702]: I20250913 00:15:18.056809 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:15:18.072177 update_engine[1702]: E20250913 00:15:18.072109 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:15:18.072313 update_engine[1702]: I20250913 00:15:18.072199 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:15:20.045809 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:51680.service - OpenSSH per-connection server daemon (10.200.16.10:51680).
Sep 13 00:15:20.676570 sshd[7512]: Accepted publickey for core from 10.200.16.10 port 51680 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:20.677569 sshd[7512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:20.685206 systemd-logind[1699]: New session 17 of user core.
Sep 13 00:15:20.687630 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:15:21.224588 sshd[7512]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:21.230263 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:15:21.232928 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:51680.service: Deactivated successfully.
Sep 13 00:15:21.236711 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:15:21.238936 systemd-logind[1699]: Removed session 17.
Sep 13 00:15:21.348828 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:51692.service - OpenSSH per-connection server daemon (10.200.16.10:51692).
Sep 13 00:15:21.980258 sshd[7525]: Accepted publickey for core from 10.200.16.10 port 51692 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:21.981839 sshd[7525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:21.989838 systemd-logind[1699]: New session 18 of user core.
Sep 13 00:15:21.995652 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:15:22.584969 sshd[7525]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:22.591729 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:51692.service: Deactivated successfully.
Sep 13 00:15:22.595174 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:15:22.596325 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:15:22.598361 systemd-logind[1699]: Removed session 18.
Sep 13 00:15:22.702257 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:51696.service - OpenSSH per-connection server daemon (10.200.16.10:51696).
Sep 13 00:15:23.337351 sshd[7536]: Accepted publickey for core from 10.200.16.10 port 51696 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:23.338868 sshd[7536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:23.344083 systemd-logind[1699]: New session 19 of user core.
Sep 13 00:15:23.347835 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:15:25.776608 sshd[7536]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:25.780677 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:15:25.781523 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:51696.service: Deactivated successfully.
Sep 13 00:15:25.783850 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:15:25.784907 systemd-logind[1699]: Removed session 19.
Sep 13 00:15:25.895335 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:51702.service - OpenSSH per-connection server daemon (10.200.16.10:51702).
Sep 13 00:15:26.510312 sshd[7575]: Accepted publickey for core from 10.200.16.10 port 51702 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:26.511895 sshd[7575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:26.516734 systemd-logind[1699]: New session 20 of user core.
Sep 13 00:15:26.526647 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:15:27.114938 sshd[7575]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:27.118763 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:51702.service: Deactivated successfully.
Sep 13 00:15:27.120761 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:15:27.121516 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:15:27.122490 systemd-logind[1699]: Removed session 20.
Sep 13 00:15:27.234313 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:51718.service - OpenSSH per-connection server daemon (10.200.16.10:51718).
Sep 13 00:15:27.544573 systemd[1]: run-containerd-runc-k8s.io-1ada2a54806f29866fad80dcfa174a9b1d51bd5d892d6ab634e18129d7dd8ef4-runc.Bfgr4f.mount: Deactivated successfully.
Sep 13 00:15:27.856192 sshd[7594]: Accepted publickey for core from 10.200.16.10 port 51718 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:27.857870 sshd[7594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:27.862957 systemd-logind[1699]: New session 21 of user core.
Sep 13 00:15:27.870637 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:15:28.054964 update_engine[1702]: I20250913 00:15:28.054884 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:15:28.055406 update_engine[1702]: I20250913 00:15:28.055178 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:15:28.055564 update_engine[1702]: I20250913 00:15:28.055526 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:15:28.076217 update_engine[1702]: E20250913 00:15:28.076149 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:15:28.076418 update_engine[1702]: I20250913 00:15:28.076390 1702 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 13 00:15:28.357548 sshd[7594]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:28.361610 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:51718.service: Deactivated successfully.
Sep 13 00:15:28.363621 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:15:28.364567 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:15:28.365577 systemd-logind[1699]: Removed session 21.
Sep 13 00:15:33.472797 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:43802.service - OpenSSH per-connection server daemon (10.200.16.10:43802).
Sep 13 00:15:34.089921 sshd[7641]: Accepted publickey for core from 10.200.16.10 port 43802 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:34.091449 sshd[7641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:34.096248 systemd-logind[1699]: New session 22 of user core.
Sep 13 00:15:34.102614 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:15:34.593597 sshd[7641]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:34.597690 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:43802.service: Deactivated successfully.
Sep 13 00:15:34.599892 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:15:34.601048 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:15:34.601994 systemd-logind[1699]: Removed session 22.
Sep 13 00:15:38.055304 update_engine[1702]: I20250913 00:15:38.055223 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:15:38.061377 update_engine[1702]: I20250913 00:15:38.055549 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:15:38.061377 update_engine[1702]: I20250913 00:15:38.055823 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:15:38.080567 update_engine[1702]: E20250913 00:15:38.080500 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:15:38.080747 update_engine[1702]: I20250913 00:15:38.080597 1702 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:15:38.080747 update_engine[1702]: I20250913 00:15:38.080610 1702 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:15:38.080747 update_engine[1702]: E20250913 00:15:38.080705 1702 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 13 00:15:38.080747 update_engine[1702]: I20250913 00:15:38.080731 1702 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:15:38.080747 update_engine[1702]: I20250913 00:15:38.080739 1702 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:15:38.080747 update_engine[1702]: I20250913 00:15:38.080746 1702 update_attempter.cc:306] Processing Done.
Sep 13 00:15:38.080976 update_engine[1702]: E20250913 00:15:38.080765 1702 update_attempter.cc:619] Update failed.
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080775 1702 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080781 1702 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080789 1702 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080880 1702 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080910 1702 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080920 1702 omaha_request_action.cc:272] Request:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]:
Sep 13 00:15:38.080976 update_engine[1702]: I20250913 00:15:38.080928 1702 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:15:38.081498 update_engine[1702]: I20250913 00:15:38.081128 1702 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:15:38.081498 update_engine[1702]: I20250913 00:15:38.081369 1702 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:15:38.081735 locksmithd[1778]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:15:38.096399 update_engine[1702]: E20250913 00:15:38.096334 1702 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096436 1702 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096448 1702 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096460 1702 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096486 1702 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096493 1702 update_attempter.cc:306] Processing Done.
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096503 1702 update_attempter.cc:310] Error event sent.
Sep 13 00:15:38.096544 update_engine[1702]: I20250913 00:15:38.096516 1702 update_check_scheduler.cc:74] Next update check in 41m12s
Sep 13 00:15:38.097481 locksmithd[1778]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:15:39.708786 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:43812.service - OpenSSH per-connection server daemon (10.200.16.10:43812).
Sep 13 00:15:40.326813 sshd[7658]: Accepted publickey for core from 10.200.16.10 port 43812 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:40.328347 sshd[7658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:40.335494 systemd-logind[1699]: New session 23 of user core.
Sep 13 00:15:40.339671 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:15:40.825605 sshd[7658]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:40.828774 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:43812.service: Deactivated successfully.
Sep 13 00:15:40.831157 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:15:40.833097 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit.
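After retry 3 the transfer is abandoned: the failure is converted to error code 37 (kActionCodeOmahaErrorInHTTPResponse), an error event is posted back to the same unreachable server (which fails the same way, harmlessly), and the scheduler arms the next attempt: "Next update check in 41m12s". The uneven interval suggests a base check period plus random fuzz so a fleet of machines does not poll the update server in lockstep. The Go sketch below assumes a 45-minute base with plus or minus ten percent fuzz; those numbers are illustrative guesses, not values taken from update_check_scheduler.cc.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextCheckDelay picks a delay uniformly from [base*(1-fuzz), base*(1+fuzz)],
// so repeated checks drift apart across a fleet instead of synchronizing.
func nextCheckDelay(base time.Duration, fuzz float64) time.Duration {
	lo := float64(base) * (1 - fuzz)
	hi := float64(base) * (1 + fuzz)
	return time.Duration(lo + rand.Float64()*(hi-lo))
}

func main() {
	d := nextCheckDelay(45*time.Minute, 0.10)
	fmt.Printf("Next update check in %s\n", d.Round(time.Second))
}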
Sep 13 00:15:40.834368 systemd-logind[1699]: Removed session 23.
Sep 13 00:15:45.939929 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:60358.service - OpenSSH per-connection server daemon (10.200.16.10:60358).
Sep 13 00:15:46.568344 sshd[7675]: Accepted publickey for core from 10.200.16.10 port 60358 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:46.569881 sshd[7675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:46.574940 systemd-logind[1699]: New session 24 of user core.
Sep 13 00:15:46.580610 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:15:47.072258 sshd[7675]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:47.076278 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:60358.service: Deactivated successfully.
Sep 13 00:15:47.078389 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:15:47.079145 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:15:47.080362 systemd-logind[1699]: Removed session 24.
Sep 13 00:15:52.187775 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:37378.service - OpenSSH per-connection server daemon (10.200.16.10:37378).
Sep 13 00:15:52.805313 sshd[7709]: Accepted publickey for core from 10.200.16.10 port 37378 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:52.806907 sshd[7709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:52.811014 systemd-logind[1699]: New session 25 of user core.
Sep 13 00:15:52.817617 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:15:53.312548 sshd[7709]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:53.316306 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:37378.service: Deactivated successfully.
Sep 13 00:15:53.318405 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:15:53.319170 systemd-logind[1699]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:15:53.320389 systemd-logind[1699]: Removed session 25.
Sep 13 00:15:57.241146 systemd[1]: run-containerd-runc-k8s.io-0e3c11ad9a3eb1d95e89048e25cfe385f359a8876e1d1f836d20ec9fbce4efe6-runc.PBpzlt.mount: Deactivated successfully.
Sep 13 00:15:58.429062 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:37388.service - OpenSSH per-connection server daemon (10.200.16.10:37388).
Sep 13 00:15:59.044276 sshd[7763]: Accepted publickey for core from 10.200.16.10 port 37388 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:15:59.045864 sshd[7763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:15:59.050652 systemd-logind[1699]: New session 26 of user core.
Sep 13 00:15:59.055625 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:15:59.553495 sshd[7763]: pam_unix(sshd:session): session closed for user core
Sep 13 00:15:59.556446 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:37388.service: Deactivated successfully.
Sep 13 00:15:59.559266 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:15:59.560926 systemd-logind[1699]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:15:59.561984 systemd-logind[1699]: Removed session 26.
Sep 13 00:16:04.665757 systemd[1]: Started sshd@24-10.200.8.12:22-10.200.16.10:38386.service - OpenSSH per-connection server daemon (10.200.16.10:38386).
Sep 13 00:16:05.286543 sshd[7819]: Accepted publickey for core from 10.200.16.10 port 38386 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:16:05.288067 sshd[7819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:16:05.293008 systemd-logind[1699]: New session 27 of user core.
Sep 13 00:16:05.296638 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:16:05.791400 sshd[7819]: pam_unix(sshd:session): session closed for user core
Sep 13 00:16:05.794619 systemd[1]: sshd@24-10.200.8.12:22-10.200.16.10:38386.service: Deactivated successfully.
Sep 13 00:16:05.797137 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:16:05.798687 systemd-logind[1699]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:16:05.800165 systemd-logind[1699]: Removed session 27.
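The remainder of this stretch is the same SSH pattern repeated through session 27: systemd starts a socket-activated per-connection sshd unit, one publickey login for core arrives from 10.200.16.10, logind opens a numbered session, and everything is torn down seconds later. A short Go sketch for auditing such a dump follows: it scans journal text on stdin for the "Accepted publickey" shape used in these entries and tallies logins per user, source address, and key fingerprint. The regular expression matches the exact line shape shown here; other sshd configurations may log differently.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.:
// Accepted publickey for core from 10.200.16.10 port 56560 ssh2: RSA SHA256:...
var accepted = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (\S+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := accepted.FindStringSubmatch(sc.Text()); m != nil {
			key := fmt.Sprintf("user=%s ip=%s key=%s", m[1], m[2], m[4])
			counts[key]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%4d  %s\n", n, k)
	}
}

Fed this section on stdin (for example, journalctl --no-pager piped into the program), everything lands in a single bucket: eighteen logins for core from 10.200.16.10 with the same RSA key, one per session from 10 through 27, which is the expected signature of a CI runner rather than anything anomalous.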