Sep 13 00:09:21.086389 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:09:21.086431 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:21.086446 kernel: BIOS-provided physical RAM map:
Sep 13 00:09:21.086458 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 13 00:09:21.086468 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 13 00:09:21.086480 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Sep 13 00:09:21.086494 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Sep 13 00:09:21.086509 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Sep 13 00:09:21.086521 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 13 00:09:21.086532 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 13 00:09:21.086544 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 13 00:09:21.086555 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 13 00:09:21.086567 kernel: printk: bootconsole [earlyser0] enabled
Sep 13 00:09:21.086579 kernel: NX (Execute Disable) protection: active
Sep 13 00:09:21.086597 kernel: APIC: Static calls initialized
Sep 13 00:09:21.086610 kernel: efi: EFI v2.7 by Microsoft
Sep 13 00:09:21.086624 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Sep 13 00:09:21.086637 kernel: SMBIOS 3.1.0 present.
Sep 13 00:09:21.086650 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Sep 13 00:09:21.086663 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 13 00:09:21.086676 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Sep 13 00:09:21.086689 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Sep 13 00:09:21.086702 kernel: Hyper-V: Nested features: 0x1e0101
Sep 13 00:09:21.086715 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 13 00:09:21.086730 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 13 00:09:21.086744 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:09:21.086757 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 13 00:09:21.086771 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Sep 13 00:09:21.086784 kernel: tsc: Detected 2593.905 MHz processor
Sep 13 00:09:21.086798 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:09:21.086812 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:09:21.086825 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Sep 13 00:09:21.086839 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 13 00:09:21.086855 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:09:21.086868 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Sep 13 00:09:21.086881 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Sep 13 00:09:21.086894 kernel: Using GB pages for direct mapping
Sep 13 00:09:21.086908 kernel: Secure boot disabled
Sep 13 00:09:21.086921 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:09:21.086935 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 13 00:09:21.086955 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.086973 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.086987 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 13 00:09:21.087001 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 13 00:09:21.087016 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087030 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087045 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087063 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087077 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087091 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087106 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:09:21.087120 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 13 00:09:21.087134 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Sep 13 00:09:21.087148 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 13 00:09:21.087162 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 13 00:09:21.087179 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 13 00:09:21.087193 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 13 00:09:21.087208 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Sep 13 00:09:21.087221 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Sep 13 00:09:21.087236 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 13 00:09:21.087250 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Sep 13 00:09:21.087265 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:09:21.087279 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:09:21.087293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 13 00:09:21.087630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Sep 13 00:09:21.087649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Sep 13 00:09:21.087662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 13 00:09:21.087676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 13 00:09:21.087690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 13 00:09:21.087703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 13 00:09:21.087716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 13 00:09:21.087730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 13 00:09:21.087743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 13 00:09:21.087762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 13 00:09:21.087776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 13 00:09:21.087790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Sep 13 00:09:21.087804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Sep 13 00:09:21.087818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Sep 13 00:09:21.087832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Sep 13 00:09:21.087845 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Sep 13 00:09:21.087857 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Sep 13 00:09:21.087868 kernel: Zone ranges:
Sep 13 00:09:21.087883 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:09:21.087896 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:09:21.087910 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:09:21.087924 kernel: Movable zone start for each node
Sep 13 00:09:21.087939 kernel: Early memory node ranges
Sep 13 00:09:21.087953 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 13 00:09:21.087967 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Sep 13 00:09:21.087981 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 13 00:09:21.087992 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 13 00:09:21.088008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 13 00:09:21.088021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:09:21.088035 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 13 00:09:21.088046 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Sep 13 00:09:21.088058 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 13 00:09:21.088072 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Sep 13 00:09:21.088084 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:09:21.088096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:09:21.088109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:09:21.088126 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 13 00:09:21.088140 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:09:21.088153 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 13 00:09:21.088166 kernel: Booting paravirtualized kernel on Hyper-V
Sep 13 00:09:21.088178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:09:21.088191 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:09:21.088205 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:09:21.088219 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:09:21.088233 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:09:21.088250 kernel: Hyper-V: PV spinlocks enabled
Sep 13 00:09:21.088264 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:09:21.088280 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:21.088296 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:09:21.089230 kernel: random: crng init done
Sep 13 00:09:21.089246 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:09:21.089255 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:09:21.089264 kernel: Fallback order for Node 0: 0
Sep 13 00:09:21.089278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Sep 13 00:09:21.089299 kernel: Policy zone: Normal
Sep 13 00:09:21.089326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:09:21.089337 kernel: software IO TLB: area num 2.
Sep 13 00:09:21.089346 kernel: Memory: 8069608K/8387460K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 317592K reserved, 0K cma-reserved)
Sep 13 00:09:21.089357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:09:21.089366 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:09:21.089375 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:09:21.089385 kernel: Dynamic Preempt: voluntary
Sep 13 00:09:21.089394 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:09:21.089406 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:09:21.089418 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:09:21.089429 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:09:21.089438 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:09:21.089449 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:09:21.089458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:09:21.089471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:09:21.089481 kernel: Using NULL legacy PIC
Sep 13 00:09:21.089491 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 13 00:09:21.089500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:09:21.089511 kernel: Console: colour dummy device 80x25
Sep 13 00:09:21.089519 kernel: printk: console [tty1] enabled
Sep 13 00:09:21.089527 kernel: printk: console [ttyS0] enabled
Sep 13 00:09:21.089535 kernel: printk: bootconsole [earlyser0] disabled
Sep 13 00:09:21.089543 kernel: ACPI: Core revision 20230628
Sep 13 00:09:21.089551 kernel: Failed to register legacy timer interrupt
Sep 13 00:09:21.089562 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:09:21.089570 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 13 00:09:21.089578 kernel: Hyper-V: Using IPI hypercalls
Sep 13 00:09:21.089586 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 13 00:09:21.089594 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 13 00:09:21.089602 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 13 00:09:21.089610 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 13 00:09:21.089618 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 13 00:09:21.089626 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 13 00:09:21.089636 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Sep 13 00:09:21.089644 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:09:21.089652 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:09:21.089661 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:09:21.089668 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:09:21.089676 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:09:21.089684 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 13 00:09:21.089692 kernel: RETBleed: Vulnerable
Sep 13 00:09:21.089700 kernel: Speculative Store Bypass: Vulnerable
Sep 13 00:09:21.089710 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:09:21.089718 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:09:21.089726 kernel: active return thunk: its_return_thunk
Sep 13 00:09:21.089734 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:09:21.089742 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:09:21.089750 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:09:21.089758 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:09:21.089766 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 13 00:09:21.089774 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 13 00:09:21.089782 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 13 00:09:21.089790 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:09:21.089802 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 13 00:09:21.089811 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 13 00:09:21.089819 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 13 00:09:21.089827 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Sep 13 00:09:21.089838 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:09:21.089847 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:09:21.089855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:09:21.089866 kernel: landlock: Up and running.
Sep 13 00:09:21.089874 kernel: SELinux: Initializing.
Sep 13 00:09:21.089882 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:09:21.089891 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:09:21.089899 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 13 00:09:21.089909 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:21.089917 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:21.089926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:21.089934 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 13 00:09:21.089942 kernel: signal: max sigframe size: 3632
Sep 13 00:09:21.089950 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:09:21.089959 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:09:21.089970 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:09:21.089979 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:09:21.089992 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:09:21.090001 kernel: .... node #0, CPUs: #1
Sep 13 00:09:21.090013 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Sep 13 00:09:21.090025 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:09:21.090035 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:09:21.090045 kernel: smpboot: Max logical packages: 1
Sep 13 00:09:21.090056 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Sep 13 00:09:21.090066 kernel: devtmpfs: initialized
Sep 13 00:09:21.090079 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:09:21.090087 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 13 00:09:21.090099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:09:21.090107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:09:21.090118 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:09:21.090126 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:09:21.090138 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:09:21.090146 kernel: audit: type=2000 audit(1757722159.028:1): state=initialized audit_enabled=0 res=1
Sep 13 00:09:21.090157 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:09:21.090168 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:09:21.090179 kernel: cpuidle: using governor menu
Sep 13 00:09:21.090187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:09:21.090199 kernel: dca service started, version 1.12.1
Sep 13 00:09:21.090207 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Sep 13 00:09:21.090219 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:09:21.090228 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:09:21.090239 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:09:21.090247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:09:21.090260 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:09:21.090269 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:09:21.090280 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:09:21.090288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:09:21.090300 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:09:21.090321 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:09:21.090331 kernel: ACPI: Interpreter enabled
Sep 13 00:09:21.090339 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:09:21.090347 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:09:21.090358 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:09:21.090366 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:09:21.090375 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 13 00:09:21.090383 kernel: iommu: Default domain type: Translated
Sep 13 00:09:21.090391 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:09:21.090403 kernel: efivars: Registered efivars operations
Sep 13 00:09:21.090411 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:09:21.090422 kernel: PCI: System does not support PCI
Sep 13 00:09:21.090430 kernel: vgaarb: loaded
Sep 13 00:09:21.090440 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Sep 13 00:09:21.090452 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:09:21.090461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:09:21.090471 kernel: pnp: PnP ACPI init
Sep 13 00:09:21.090483 kernel: pnp: PnP ACPI: found 3 devices
Sep 13 00:09:21.090493 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:09:21.090502 kernel: NET: Registered PF_INET protocol family
Sep 13 00:09:21.090513 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:09:21.090524 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:09:21.090535 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:09:21.090547 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:09:21.090555 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:09:21.090563 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:09:21.090571 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:09:21.090580 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:09:21.090591 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:09:21.090599 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:09:21.090610 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:09:21.090621 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:09:21.090630 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Sep 13 00:09:21.090638 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:09:21.090648 kernel: Initialise system trusted keyrings
Sep 13 00:09:21.090657 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:09:21.090665 kernel: Key type asymmetric registered
Sep 13 00:09:21.090676 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:09:21.090684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:09:21.090696 kernel: io scheduler mq-deadline registered
Sep 13 00:09:21.090706 kernel: io scheduler kyber registered
Sep 13 00:09:21.090720 kernel: io scheduler bfq registered
Sep 13 00:09:21.090728 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:09:21.090740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:09:21.090748 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:09:21.090756 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:09:21.090764 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 00:09:21.090941 kernel: rtc_cmos 00:02: registered as rtc0
Sep 13 00:09:21.091050 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:09:20 UTC (1757722160)
Sep 13 00:09:21.091144 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 13 00:09:21.091157 kernel: intel_pstate: CPU model not supported
Sep 13 00:09:21.091167 kernel: efifb: probing for efifb
Sep 13 00:09:21.091177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:09:21.091186 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:09:21.091196 kernel: efifb: scrolling: redraw
Sep 13 00:09:21.091205 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:09:21.091219 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:09:21.091228 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:09:21.091239 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:09:21.091249 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:09:21.091262 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:09:21.091272 kernel: Segment Routing with IPv6
Sep 13 00:09:21.091284 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:09:21.091297 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:09:21.094998 kernel: Key type dns_resolver registered
Sep 13 00:09:21.095030 kernel: IPI shorthand broadcast: enabled
Sep 13 00:09:21.095053 kernel: sched_clock: Marking stable (834003000, 43139400)->(1065990600, -188848200)
Sep 13 00:09:21.095069 kernel: registered taskstats version 1
Sep 13 00:09:21.095084 kernel: Loading compiled-in X.509 certificates
Sep 13 00:09:21.095099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:09:21.095114 kernel: Key type .fscrypt registered
Sep 13 00:09:21.095128 kernel: Key type fscrypt-provisioning registered
Sep 13 00:09:21.095143 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:09:21.095158 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:09:21.095176 kernel: ima: No architecture policies found
Sep 13 00:09:21.095191 kernel: clk: Disabling unused clocks
Sep 13 00:09:21.095206 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:09:21.095221 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:09:21.095235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:09:21.095250 kernel: Run /init as init process
Sep 13 00:09:21.095265 kernel: with arguments:
Sep 13 00:09:21.095279 kernel: /init
Sep 13 00:09:21.095294 kernel: with environment:
Sep 13 00:09:21.095325 kernel: HOME=/
Sep 13 00:09:21.095344 kernel: TERM=linux
Sep 13 00:09:21.095359 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:09:21.095378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:09:21.095397 systemd[1]: Detected virtualization microsoft.
Sep 13 00:09:21.095412 systemd[1]: Detected architecture x86-64.
Sep 13 00:09:21.095428 systemd[1]: Running in initrd.
Sep 13 00:09:21.095443 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:09:21.095461 systemd[1]: Hostname set to .
Sep 13 00:09:21.095477 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:09:21.095492 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:09:21.095507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:09:21.095523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:09:21.095540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:09:21.095556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:09:21.095571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:09:21.095590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:09:21.095608 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:09:21.095624 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:09:21.095639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:09:21.095655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:09:21.095670 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:09:21.095686 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:09:21.095704 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:09:21.095720 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:09:21.095735 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:09:21.095751 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:09:21.095766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:09:21.095782 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:09:21.095798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:09:21.095813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:09:21.095828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:09:21.095847 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:09:21.095862 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:09:21.095878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:09:21.095894 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:09:21.095909 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:09:21.095925 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:09:21.095941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:09:21.095956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:21.096012 systemd-journald[176]: Collecting audit messages is disabled.
Sep 13 00:09:21.096047 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:09:21.096063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:09:21.096078 systemd-journald[176]: Journal started
Sep 13 00:09:21.096117 systemd-journald[176]: Runtime Journal (/run/log/journal/ad0c1eba6a4b46de9a1c7951a6f56aa0) is 8.0M, max 158.8M, 150.8M free.
Sep 13 00:09:21.104341 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:09:21.105979 systemd-modules-load[177]: Inserted module 'overlay'
Sep 13 00:09:21.112445 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:09:21.126506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:09:21.137542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:09:21.152906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:09:21.153478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:21.161330 kernel: Bridge firewalling registered
Sep 13 00:09:21.161345 systemd-modules-load[177]: Inserted module 'br_netfilter'
Sep 13 00:09:21.162956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:09:21.172014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:09:21.182498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:21.195595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:09:21.204715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:09:21.207697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:21.224204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:09:21.232610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:21.235203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:09:21.241587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:21.257490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:09:21.286135 dracut-cmdline[213]: dracut-dracut-053
Sep 13 00:09:21.289917 systemd-resolved[208]: Positive Trust Anchors:
Sep 13 00:09:21.289932 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:09:21.296697 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:21.289986 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:09:21.296274 systemd-resolved[208]: Defaulting to hostname 'linux'.
Sep 13 00:09:21.298620 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:21.298757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:21.405345 kernel: SCSI subsystem initialized
Sep 13 00:09:21.416335 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:09:21.427341 kernel: iscsi: registered transport (tcp)
Sep 13 00:09:21.448675 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:09:21.448769 kernel: QLogic iSCSI HBA Driver
Sep 13 00:09:21.485226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:09:21.495509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:09:21.522340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:09:21.522414 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:09:21.526740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:09:21.567360 kernel: raid6: avx512x4 gen() 18344 MB/s
Sep 13 00:09:21.586329 kernel: raid6: avx512x2 gen() 18322 MB/s
Sep 13 00:09:21.604324 kernel: raid6: avx512x1 gen() 18123 MB/s
Sep 13 00:09:21.622336 kernel: raid6: avx2x4 gen() 18246 MB/s
Sep 13 00:09:21.641325 kernel: raid6: avx2x2 gen() 18323 MB/s
Sep 13 00:09:21.661004 kernel: raid6: avx2x1 gen() 13813 MB/s
Sep 13 00:09:21.661055 kernel: raid6: using algorithm avx512x4 gen() 18344 MB/s
Sep 13 00:09:21.682354 kernel: raid6: .... xor() 8095 MB/s, rmw enabled
Sep 13 00:09:21.682404 kernel: raid6: using avx512x2 recovery algorithm
Sep 13 00:09:21.705336 kernel: xor: automatically using best checksumming function avx
Sep 13 00:09:21.852347 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:09:21.862414 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:09:21.870459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:09:21.884228 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Sep 13 00:09:21.888917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:09:21.901594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:09:21.914870 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Sep 13 00:09:21.942734 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:09:21.955503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:09:21.998972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:09:22.010547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:09:22.033380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:09:22.043279 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:09:22.050601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:09:22.056387 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:09:22.069591 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:09:22.089337 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:09:22.101343 kernel: hv_vmbus: Vmbus version:5.2
Sep 13 00:09:22.108267 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:09:22.121333 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:09:22.128011 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:09:22.132065 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:09:22.128138 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:22.139640 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:22.145998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:22.146226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:22.151365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:22.165726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:22.178693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:23.024724 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:09:23.024756 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:09:23.024771 kernel: PTP clock support registered
Sep 13 00:09:23.024795 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:09:23.024806 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:09:23.024818 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:09:23.024830 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:09:23.024842 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:09:23.024856 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:09:23.024867 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 00:09:23.024878 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 00:09:23.024888 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 00:09:23.024902 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 00:09:23.024914 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 00:09:23.024926 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 00:09:22.184895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:22.978480 systemd-resolved[208]: Clock change detected. Flushing caches.
Sep 13 00:09:23.041041 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 00:09:23.046471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:23.057540 kernel: scsi host0: storvsc_host_t
Sep 13 00:09:23.057633 kernel: scsi host1: storvsc_host_t
Sep 13 00:09:23.064373 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 00:09:23.066709 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 00:09:23.083345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:23.098072 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 00:09:23.098373 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:09:23.103875 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 00:09:23.100535 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:23.122908 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 00:09:23.123329 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 00:09:23.127442 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:09:23.127683 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 00:09:23.123911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:23.134413 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 00:09:23.139060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:23.142047 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:09:23.190058 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: VF slot 1 added
Sep 13 00:09:23.200494 kernel: hv_vmbus: registering driver hv_pci
Sep 13 00:09:23.200565 kernel: hv_pci 66abed60-5ed8-4d24-a750-5fb03832036f: PCI VMBus probing: Using version 0x10004
Sep 13 00:09:23.207582 kernel: hv_pci 66abed60-5ed8-4d24-a750-5fb03832036f: PCI host bridge to bus 5ed8:00
Sep 13 00:09:23.207933 kernel: pci_bus 5ed8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Sep 13 00:09:23.210319 kernel: pci_bus 5ed8:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 00:09:23.215045 kernel: pci 5ed8:00:02.0: [15b3:1016] type 00 class 0x020000
Sep 13 00:09:23.219050 kernel: pci 5ed8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:09:23.223390 kernel: pci 5ed8:00:02.0: enabling Extended Tags
Sep 13 00:09:23.233068 kernel: pci 5ed8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5ed8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Sep 13 00:09:23.239963 kernel: pci_bus 5ed8:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 00:09:23.240371 kernel: pci 5ed8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Sep 13 00:09:23.413380 kernel: mlx5_core 5ed8:00:02.0: enabling device (0000 -> 0002)
Sep 13 00:09:23.418057 kernel: mlx5_core 5ed8:00:02.0: firmware version: 14.30.5000
Sep 13 00:09:23.648043 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: VF registering: eth1
Sep 13 00:09:23.650260 kernel: mlx5_core 5ed8:00:02.0 eth1: joined to eth0
Sep 13 00:09:23.655606 kernel: mlx5_core 5ed8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 13 00:09:23.668188 kernel: mlx5_core 5ed8:00:02.0 enP24280s1: renamed from eth1
Sep 13 00:09:23.708096 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (441)
Sep 13 00:09:23.727086 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (445)
Sep 13 00:09:23.743764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 13 00:09:23.788173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 13 00:09:23.804996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 13 00:09:23.816351 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 13 00:09:23.842129 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 13 00:09:23.857273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:09:23.883109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:23.894047 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:23.902043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:24.902121 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:09:24.902899 disk-uuid[599]: The operation has completed successfully.
Sep 13 00:09:25.014798 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:09:25.014940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:09:25.045198 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:09:25.054874 sh[712]: Success
Sep 13 00:09:25.090201 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:09:25.455216 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:09:25.465150 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:09:25.470719 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:09:25.491046 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:09:25.491106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:25.495522 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:09:25.498112 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:09:25.500468 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:09:25.878641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:09:25.882913 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:09:25.890303 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:09:25.898574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:09:25.915498 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:25.915563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:25.917713 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:25.978092 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:25.991859 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:09:25.996162 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:26.007408 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:09:26.017281 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:09:26.026967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:09:26.039293 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:09:26.059457 systemd-networkd[896]: lo: Link UP
Sep 13 00:09:26.059468 systemd-networkd[896]: lo: Gained carrier
Sep 13 00:09:26.061757 systemd-networkd[896]: Enumeration completed
Sep 13 00:09:26.061880 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:09:26.064859 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:26.064865 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:09:26.064925 systemd[1]: Reached target network.target - Network.
Sep 13 00:09:26.140046 kernel: mlx5_core 5ed8:00:02.0 enP24280s1: Link up
Sep 13 00:09:26.140368 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:09:26.181045 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: Data path switched to VF: enP24280s1
Sep 13 00:09:26.182065 systemd-networkd[896]: enP24280s1: Link UP
Sep 13 00:09:26.182196 systemd-networkd[896]: eth0: Link UP
Sep 13 00:09:26.182402 systemd-networkd[896]: eth0: Gained carrier
Sep 13 00:09:26.182417 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:26.186331 systemd-networkd[896]: enP24280s1: Gained carrier
Sep 13 00:09:26.224118 systemd-networkd[896]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 13 00:09:27.099689 ignition[891]: Ignition 2.19.0
Sep 13 00:09:27.099702 ignition[891]: Stage: fetch-offline
Sep 13 00:09:27.099746 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:27.099757 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:09:27.099880 ignition[891]: parsed url from cmdline: ""
Sep 13 00:09:27.099885 ignition[891]: no config URL provided
Sep 13 00:09:27.099892 ignition[891]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:27.099904 ignition[891]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:27.099912 ignition[891]: failed to fetch config: resource requires networking
Sep 13 00:09:27.101528 ignition[891]: Ignition finished successfully
Sep 13 00:09:27.117392 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:09:27.126309 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:09:27.143848 ignition[904]: Ignition 2.19.0
Sep 13 00:09:27.143861 ignition[904]: Stage: fetch
Sep 13 00:09:27.144164 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:27.144180 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:09:27.144316 ignition[904]: parsed url from cmdline: ""
Sep 13 00:09:27.144319 ignition[904]: no config URL provided
Sep 13 00:09:27.144324 ignition[904]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:27.144332 ignition[904]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:27.144351 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 00:09:27.264531 ignition[904]: GET result: OK
Sep 13 00:09:27.264725 ignition[904]: config has been read from IMDS userdata
Sep 13 00:09:27.264767 ignition[904]: parsing config with SHA512: e2c381b0f25a70bb6feba3e2c993d19d3ab02c7bc875604c71105d6ba1b582ae192814fbcfa11b26da7a9a1fb849f0fcbc29494e41dfaefad0c500ea21212f98
Sep 13 00:09:27.273332 unknown[904]: fetched base config from "system"
Sep 13 00:09:27.273353 unknown[904]: fetched base config from "system"
Sep 13 00:09:27.276426 ignition[904]: fetch: fetch complete
Sep 13 00:09:27.273362 unknown[904]: fetched user config from "azure"
Sep 13 00:09:27.276434 ignition[904]: fetch: fetch passed
Sep 13 00:09:27.276500 ignition[904]: Ignition finished successfully
Sep 13 00:09:27.286913 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:09:27.299301 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:09:27.317213 ignition[910]: Ignition 2.19.0
Sep 13 00:09:27.317224 ignition[910]: Stage: kargs
Sep 13 00:09:27.317449 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:27.321225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:09:27.317462 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:09:27.318432 ignition[910]: kargs: kargs passed
Sep 13 00:09:27.318486 ignition[910]: Ignition finished successfully
Sep 13 00:09:27.335528 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:09:27.355435 ignition[916]: Ignition 2.19.0
Sep 13 00:09:27.355447 ignition[916]: Stage: disks
Sep 13 00:09:27.355692 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:27.358034 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:09:27.355707 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:09:27.361931 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:09:27.356589 ignition[916]: disks: disks passed
Sep 13 00:09:27.366140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:09:27.356642 ignition[916]: Ignition finished successfully
Sep 13 00:09:27.369183 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:09:27.373477 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:09:27.376003 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:09:27.410232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:09:27.470720 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 13 00:09:27.476494 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:09:27.492155 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:09:27.583367 systemd-networkd[896]: eth0: Gained IPv6LL
Sep 13 00:09:27.586659 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:09:27.587769 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:09:27.592322 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:09:27.631285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:27.652080 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935)
Sep 13 00:09:27.664199 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:27.664262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:27.664275 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:27.668479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:09:27.678313 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:09:27.684144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:09:27.684185 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:09:27.688356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:09:27.707182 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:27.696542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:09:27.706789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:09:28.450136 coreos-metadata[950]: Sep 13 00:09:28.449 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 00:09:28.460190 coreos-metadata[950]: Sep 13 00:09:28.460 INFO Fetch successful
Sep 13 00:09:28.462640 coreos-metadata[950]: Sep 13 00:09:28.460 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 00:09:28.471813 coreos-metadata[950]: Sep 13 00:09:28.471 INFO Fetch successful
Sep 13 00:09:28.476302 coreos-metadata[950]: Sep 13 00:09:28.474 INFO wrote hostname ci-4081.3.5-n-e49e858a9f to /sysroot/etc/hostname
Sep 13 00:09:28.481561 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:09:28.481274 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:09:28.517160 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:09:28.548710 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:09:28.567760 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:09:29.726199 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:09:29.739175 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:09:29.744770 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:09:29.756097 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:09:29.760476 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:29.794489 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:09:29.799568 ignition[1056]: INFO : Ignition 2.19.0
Sep 13 00:09:29.799568 ignition[1056]: INFO : Stage: mount
Sep 13 00:09:29.803159 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:29.803159 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:09:29.803159 ignition[1056]: INFO : mount: mount passed
Sep 13 00:09:29.803159 ignition[1056]: INFO : Ignition finished successfully
Sep 13 00:09:29.801880 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:09:29.818259 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:09:29.826823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:29.851220 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1068)
Sep 13 00:09:29.851297 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:29.854104 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:29.856561 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:09:29.863044 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:09:29.864918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:09:29.891048 ignition[1085]: INFO : Ignition 2.19.0 Sep 13 00:09:29.891048 ignition[1085]: INFO : Stage: files Sep 13 00:09:29.891048 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:29.891048 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:29.903292 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:09:29.907474 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:09:29.907474 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:09:30.137811 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:09:30.141592 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:09:30.141592 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:09:30.138317 unknown[1085]: wrote ssh authorized keys file for user: core Sep 13 00:09:30.175688 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:30.180697 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:09:30.524637 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:09:30.930963 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:30.930963 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:30.939516 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:30.939516 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:09:31.553818 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:09:33.066030 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:33.066030 ignition[1085]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:09:33.100860 ignition[1085]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: files passed Sep 13 00:09:33.122532 ignition[1085]: INFO : Ignition finished successfully Sep 13 00:09:33.109626 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:09:33.135922 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:09:33.152200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:09:33.155320 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:09:33.155441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:09:33.167010 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.167010 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.171829 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.169276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:33.183724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:09:33.199227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:09:33.225242 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:09:33.225375 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
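As its last files-stage op the run writes /sysroot/etc/.ignition-result.json, so the booted system carries a machine-readable record of provisioning at /etc/.ignition-result.json. A sketch for inspecting it after boot (the log does not show the schema, so this just pretty-prints whatever is there):

    import json

    # Result file written by Ignition's op(e) above, seen from the real root.
    with open("/etc/.ignition-result.json") as f:
        print(json.dumps(json.load(f), indent=2))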
Sep 13 00:09:33.234087 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:09:33.236581 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:09:33.241062 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:09:33.250398 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:09:33.266757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:33.279276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:09:33.289981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:33.293169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:33.299176 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:09:33.304387 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:09:33.304557 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:33.307669 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:09:33.308062 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:09:33.308516 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:09:33.308979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:33.309361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:33.309753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:09:33.310164 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:33.310619 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:09:33.310991 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:09:33.312013 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:09:33.312489 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:09:33.312645 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:33.313332 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:33.313981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:33.314300 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:09:33.355070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:33.359231 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:09:33.359375 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:33.364560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:09:33.364739 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:33.369221 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:09:33.369379 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:09:33.374531 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:09:33.374690 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:09:33.427257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 13 00:09:33.430592 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:09:33.430808 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:33.447361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:09:33.450273 ignition[1137]: INFO : Ignition 2.19.0 Sep 13 00:09:33.450273 ignition[1137]: INFO : Stage: umount Sep 13 00:09:33.450273 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:33.450273 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:33.450273 ignition[1137]: INFO : umount: umount passed Sep 13 00:09:33.450273 ignition[1137]: INFO : Ignition finished successfully Sep 13 00:09:33.452133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:09:33.452413 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:33.457400 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:09:33.457524 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:33.466404 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:09:33.466521 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:09:33.471938 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:09:33.479036 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:09:33.487244 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:09:33.487359 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:09:33.490848 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:09:33.490913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:09:33.495186 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:09:33.495241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:09:33.505063 systemd[1]: Stopped target network.target - Network. Sep 13 00:09:33.509428 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:09:33.511623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:33.527176 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:09:33.529409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:09:33.534079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:33.540172 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:09:33.542362 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:09:33.546301 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:09:33.546365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:33.550652 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:09:33.550705 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:33.554816 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:09:33.556843 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:09:33.567681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:09:33.567770 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:33.574534 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 13 00:09:33.579723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:09:33.581826 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:09:33.588104 systemd-networkd[896]: eth0: DHCPv6 lease lost Sep 13 00:09:33.591318 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:09:33.591458 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:09:33.596392 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:09:33.596491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:09:33.604105 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:09:33.604165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:33.617263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:09:33.619237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:09:33.619316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:33.630432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:09:33.630516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:33.638318 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:09:33.638400 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:33.643111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:09:33.643179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:33.648882 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:33.668720 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:09:33.670861 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:33.676883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:09:33.676949 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:33.684112 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:09:33.684171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:33.688712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:09:33.688782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:33.694191 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:09:33.694251 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:33.712076 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: Data path switched from VF: enP24280s1 Sep 13 00:09:33.698652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:33.698704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:33.718210 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:09:33.720808 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:09:33.720898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:33.729485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:33.729549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 00:09:33.740328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:09:33.740476 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:09:33.750731 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:09:33.750840 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:09:34.231737 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:09:34.231871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:09:34.235108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:09:34.240174 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:09:34.240254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:34.258229 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:09:34.541840 systemd[1]: Switching root. Sep 13 00:09:34.631708 systemd-journald[176]: Journal stopped Sep 13 00:09:21.087179 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 13 00:09:21.087193 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 13 00:09:21.087208 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Sep 13 00:09:21.087221 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Sep 13 00:09:21.087236 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 13 00:09:21.087250 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Sep 13 00:09:21.087265 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:09:21.087279 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 00:09:21.087293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 13 00:09:21.087630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Sep 13 00:09:21.087649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Sep 13 00:09:21.087662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 13 00:09:21.087676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 13 00:09:21.087690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 13 00:09:21.087703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 13 00:09:21.087716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 13 00:09:21.087730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 13 00:09:21.087743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 13 00:09:21.087762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 13 00:09:21.087776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 13 00:09:21.087790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Sep 13 00:09:21.087804 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Sep 13 00:09:21.087818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Sep 13 00:09:21.087832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Sep 13 00:09:21.087845 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Sep 13 00:09:21.087857 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Sep 13 00:09:21.087868 kernel: Zone ranges: Sep 13 00:09:21.087883 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:09:21.087896 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 13 00:09:21.087910 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 13 00:09:21.087924 kernel: Movable zone start for each node Sep 13 00:09:21.087939 kernel: Early memory node ranges Sep 13 00:09:21.087953 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:09:21.087967 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Sep 13 00:09:21.087981 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 13 00:09:21.087992 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 13 00:09:21.088008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 13 00:09:21.088021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:09:21.088035 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:09:21.088046 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Sep 13 00:09:21.088058 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 13 00:09:21.088072 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Sep 13 00:09:21.088084 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:09:21.088096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:09:21.088109 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:09:21.088126 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 13 00:09:21.088140 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:09:21.088153 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 13 00:09:21.088166 kernel: Booting paravirtualized kernel on Hyper-V Sep 13 00:09:21.088178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:09:21.088191 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:09:21.088205 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 13 00:09:21.088219 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 13 00:09:21.088233 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:09:21.088250 kernel: Hyper-V: PV spinlocks enabled Sep 13 00:09:21.088264 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:09:21.088280 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:21.088296 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:09:21.089230 kernel: random: crng init done Sep 13 00:09:21.089246 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 13 00:09:21.089255 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:09:21.089264 kernel: Fallback order for Node 0: 0 Sep 13 00:09:21.089278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Sep 13 00:09:21.089299 kernel: Policy zone: Normal Sep 13 00:09:21.089326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:09:21.089337 kernel: software IO TLB: area num 2. Sep 13 00:09:21.089346 kernel: Memory: 8069608K/8387460K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 317592K reserved, 0K cma-reserved) Sep 13 00:09:21.089357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:09:21.089366 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:09:21.089375 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:09:21.089385 kernel: Dynamic Preempt: voluntary Sep 13 00:09:21.089394 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:09:21.089406 kernel: rcu: RCU event tracing is enabled. Sep 13 00:09:21.089418 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:09:21.089429 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:09:21.089438 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:09:21.089449 kernel: Tracing variant of Tasks RCU enabled. 
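The percpu line above breaks the 58 pages per CPU into static (s), reserved (r), and dynamic (d) areas; the three sizes sum to exactly 58 pages. A quick check, assuming 4 KiB pages:

    # s197160 r8192 d32216 from the percpu line above.
    static_sz, reserved_sz, dynamic_sz = 197160, 8192, 32216
    print((static_sz + reserved_sz + dynamic_sz) / 4096)  # 58.0 pages/cpu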
Sep 13 00:09:21.089458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:09:21.089471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:09:21.089481 kernel: Using NULL legacy PIC Sep 13 00:09:21.089491 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 13 00:09:21.089500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:09:21.089511 kernel: Console: colour dummy device 80x25 Sep 13 00:09:21.089519 kernel: printk: console [tty1] enabled Sep 13 00:09:21.089527 kernel: printk: console [ttyS0] enabled Sep 13 00:09:21.089535 kernel: printk: bootconsole [earlyser0] disabled Sep 13 00:09:21.089543 kernel: ACPI: Core revision 20230628 Sep 13 00:09:21.089551 kernel: Failed to register legacy timer interrupt Sep 13 00:09:21.089562 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:09:21.089570 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 13 00:09:21.089578 kernel: Hyper-V: Using IPI hypercalls Sep 13 00:09:21.089586 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 13 00:09:21.089594 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 13 00:09:21.089602 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 13 00:09:21.089610 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 13 00:09:21.089618 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 13 00:09:21.089626 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 13 00:09:21.089636 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Sep 13 00:09:21.089644 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 13 00:09:21.089652 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 13 00:09:21.089661 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:09:21.089668 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:09:21.089676 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:09:21.089684 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 13 00:09:21.089692 kernel: RETBleed: Vulnerable Sep 13 00:09:21.089700 kernel: Speculative Store Bypass: Vulnerable Sep 13 00:09:21.089710 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:09:21.089718 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:09:21.089726 kernel: active return thunk: its_return_thunk Sep 13 00:09:21.089734 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:09:21.089742 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:09:21.089750 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:09:21.089758 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:09:21.089766 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 13 00:09:21.089774 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 13 00:09:21.089782 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 13 00:09:21.089790 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:09:21.089802 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 13 00:09:21.089811 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 13 00:09:21.089819 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 13 00:09:21.089827 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Sep 13 00:09:21.089838 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:09:21.089847 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:09:21.089855 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:09:21.089866 kernel: landlock: Up and running. Sep 13 00:09:21.089874 kernel: SELinux: Initializing. Sep 13 00:09:21.089882 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:09:21.089891 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:09:21.089899 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 13 00:09:21.089909 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:21.089917 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:21.089926 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:09:21.089934 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 13 00:09:21.089942 kernel: signal: max sigframe size: 3632 Sep 13 00:09:21.089950 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:09:21.089959 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:09:21.089970 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:09:21.089979 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:09:21.089992 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:09:21.090001 kernel: .... node #0, CPUs: #1 Sep 13 00:09:21.090013 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Sep 13 00:09:21.090025 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
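With calibration skipped, the BogoMIPS value above is derived from the timer frequency: lpj (loops per jiffy) comes straight from the 2593.905 MHz TSC, and BogoMIPS is lpj * HZ / 500000. A sketch reproducing both the per-CPU figure and the two-CPU total logged just below (assuming CONFIG_HZ=1000, consistent with lpj matching the TSC kHz):

    HZ = 1000        # assumed tick rate; matches lpj == tsc kHz here
    lpj = 2593905    # loops per jiffy, from the calibration line above
    per_cpu = lpj * HZ / 500000.0
    print(f"{per_cpu:.2f} BogoMIPS")   # 5187.81, as logged
    print(f"{2 * per_cpu:.2f} total")  # 10375.62, logged after SMP bring-up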
Sep 13 00:09:21.090035 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:09:21.090045 kernel: smpboot: Max logical packages: 1 Sep 13 00:09:21.090056 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Sep 13 00:09:21.090066 kernel: devtmpfs: initialized Sep 13 00:09:21.090079 kernel: x86/mm: Memory block size: 128MB Sep 13 00:09:21.090087 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 13 00:09:21.090099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:09:21.090107 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:09:21.090118 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:09:21.090126 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:09:21.090138 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:09:21.090146 kernel: audit: type=2000 audit(1757722159.028:1): state=initialized audit_enabled=0 res=1 Sep 13 00:09:21.090157 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:09:21.090168 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:09:21.090179 kernel: cpuidle: using governor menu Sep 13 00:09:21.090187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:09:21.090199 kernel: dca service started, version 1.12.1 Sep 13 00:09:21.090207 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Sep 13 00:09:21.090219 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 00:09:21.090228 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:09:21.090239 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:09:21.090247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:09:21.090260 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:09:21.090269 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:09:21.090280 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:09:21.090288 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:09:21.090300 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:09:21.090321 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:09:21.090331 kernel: ACPI: Interpreter enabled Sep 13 00:09:21.090339 kernel: ACPI: PM: (supports S0 S5) Sep 13 00:09:21.090347 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:09:21.090358 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:09:21.090366 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 13 00:09:21.090375 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 13 00:09:21.090383 kernel: iommu: Default domain type: Translated Sep 13 00:09:21.090391 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:09:21.090403 kernel: efivars: Registered efivars operations Sep 13 00:09:21.090411 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:09:21.090422 kernel: PCI: System does not support PCI Sep 13 00:09:21.090430 kernel: vgaarb: loaded Sep 13 00:09:21.090440 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Sep 13 00:09:21.090452 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:09:21.090461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:09:21.090471 kernel: pnp: PnP ACPI init Sep 13 00:09:21.090483 kernel: pnp: PnP ACPI: found 3 
devices Sep 13 00:09:21.090493 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:09:21.090502 kernel: NET: Registered PF_INET protocol family Sep 13 00:09:21.090513 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:09:21.090524 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 13 00:09:21.090535 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:09:21.090547 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:09:21.090555 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 13 00:09:21.090563 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 13 00:09:21.090571 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:09:21.090580 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 13 00:09:21.090591 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:09:21.090599 kernel: NET: Registered PF_XDP protocol family Sep 13 00:09:21.090610 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:09:21.090621 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:09:21.090630 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Sep 13 00:09:21.090638 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:09:21.090648 kernel: Initialise system trusted keyrings Sep 13 00:09:21.090657 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 13 00:09:21.090665 kernel: Key type asymmetric registered Sep 13 00:09:21.090676 kernel: Asymmetric key parser 'x509' registered Sep 13 00:09:21.090684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:09:21.090696 kernel: io scheduler mq-deadline registered Sep 13 00:09:21.090706 kernel: io scheduler kyber registered Sep 13 00:09:21.090720 kernel: io scheduler bfq registered Sep 13 00:09:21.090728 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:09:21.090740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:09:21.090748 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:09:21.090756 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:09:21.090764 kernel: i8042: PNP: No PS/2 controller found. 
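In the hash-table lines above, "order" is the allocation size in pages: the kernel grabs PAGE_SIZE << order bytes. A quick check against the logged values (assuming 4 KiB pages):

    PAGE_SIZE = 4096
    # (entries, order, bytes) as logged for each table above.
    tables = {"TCP established": (65536, 7, 524288),
              "TCP bind":        (65536, 9, 2097152),
              "UDP":             (4096,  5, 131072)}
    for name, (entries, order, logged) in tables.items():
        size = PAGE_SIZE << order
        assert size == logged
        print(f"{name}: {size} bytes, {size // entries} bytes/bucket")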
Sep 13 00:09:21.090941 kernel: rtc_cmos 00:02: registered as rtc0 Sep 13 00:09:21.091050 kernel: rtc_cmos 00:02: setting system clock to 2025-09-13T00:09:20 UTC (1757722160) Sep 13 00:09:21.091144 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 13 00:09:21.091157 kernel: intel_pstate: CPU model not supported Sep 13 00:09:21.091167 kernel: efifb: probing for efifb Sep 13 00:09:21.091177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 13 00:09:21.091186 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 13 00:09:21.091196 kernel: efifb: scrolling: redraw Sep 13 00:09:21.091205 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 00:09:21.091219 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:09:21.091228 kernel: fb0: EFI VGA frame buffer device Sep 13 00:09:21.091239 kernel: pstore: Using crash dump compression: deflate Sep 13 00:09:21.091249 kernel: pstore: Registered efi_pstore as persistent store backend Sep 13 00:09:21.091262 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:09:21.091272 kernel: Segment Routing with IPv6 Sep 13 00:09:21.091284 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:09:21.091297 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:09:21.094998 kernel: Key type dns_resolver registered Sep 13 00:09:21.095030 kernel: IPI shorthand broadcast: enabled Sep 13 00:09:21.095053 kernel: sched_clock: Marking stable (834003000, 43139400)->(1065990600, -188848200) Sep 13 00:09:21.095069 kernel: registered taskstats version 1 Sep 13 00:09:21.095084 kernel: Loading compiled-in X.509 certificates Sep 13 00:09:21.095099 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:09:21.095114 kernel: Key type .fscrypt registered Sep 13 00:09:21.095128 kernel: Key type fscrypt-provisioning registered Sep 13 00:09:21.095143 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:09:21.095158 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:09:21.095176 kernel: ima: No architecture policies found Sep 13 00:09:21.095191 kernel: clk: Disabling unused clocks Sep 13 00:09:21.095206 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:09:21.095221 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:09:21.095235 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:09:21.095250 kernel: Run /init as init process Sep 13 00:09:21.095265 kernel: with arguments: Sep 13 00:09:21.095279 kernel: /init Sep 13 00:09:21.095294 kernel: with environment: Sep 13 00:09:21.095325 kernel: HOME=/ Sep 13 00:09:21.095344 kernel: TERM=linux Sep 13 00:09:21.095359 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:09:21.095378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:21.095397 systemd[1]: Detected virtualization microsoft. Sep 13 00:09:21.095412 systemd[1]: Detected architecture x86-64. Sep 13 00:09:21.095428 systemd[1]: Running in initrd. Sep 13 00:09:21.095443 systemd[1]: No hostname configured, using default hostname. Sep 13 00:09:21.095461 systemd[1]: Hostname set to . 
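The rtc_cmos entry above logs the same instant twice, as an ISO timestamp and as a Unix epoch. Verifying that the pair agrees:

    from datetime import datetime, timezone

    # Epoch value printed by rtc_cmos above.
    print(datetime.fromtimestamp(1757722160, tz=timezone.utc).isoformat())
    # -> 2025-09-13T00:09:20+00:00, matching the logged RTC time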
Sep 13 00:09:21.095477 systemd[1]: Initializing machine ID from random generator. Sep 13 00:09:21.095492 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:09:21.095507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:21.095523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:21.095540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:09:21.095556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:21.095571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:09:21.095590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:09:21.095608 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:09:21.095624 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:09:21.095639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:21.095655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:21.095670 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:09:21.095686 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:21.095704 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:21.095720 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:09:21.095735 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:21.095751 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:21.095766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:09:21.095782 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:09:21.095798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:21.095813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:21.095828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:21.095847 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:09:21.095862 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:09:21.095878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:21.095894 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:09:21.095909 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:09:21.095925 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:21.095941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:21.095956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:21.096012 systemd-journald[176]: Collecting audit messages is disabled. Sep 13 00:09:21.096047 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:21.096063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 13 00:09:21.096078 systemd-journald[176]: Journal started Sep 13 00:09:21.096117 systemd-journald[176]: Runtime Journal (/run/log/journal/ad0c1eba6a4b46de9a1c7951a6f56aa0) is 8.0M, max 158.8M, 150.8M free. Sep 13 00:09:21.104341 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:09:21.105979 systemd-modules-load[177]: Inserted module 'overlay' Sep 13 00:09:21.112445 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:09:21.126506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:09:21.137542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:09:21.152906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:09:21.153478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:21.161330 kernel: Bridge firewalling registered Sep 13 00:09:21.161345 systemd-modules-load[177]: Inserted module 'br_netfilter' Sep 13 00:09:21.162956 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:09:21.172014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:21.182498 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:21.195595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:21.204715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:09:21.207697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:21.224204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:21.232610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:09:21.235203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:21.241587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:21.257490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:09:21.286135 dracut-cmdline[213]: dracut-dracut-053 Sep 13 00:09:21.289917 systemd-resolved[208]: Positive Trust Anchors: Sep 13 00:09:21.289932 systemd-resolved[208]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:09:21.296697 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:09:21.289986 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:09:21.296274 systemd-resolved[208]: Defaulting to hostname 'linux'. Sep 13 00:09:21.298620 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:09:21.298757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:21.405345 kernel: SCSI subsystem initialized Sep 13 00:09:21.416335 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:09:21.427341 kernel: iscsi: registered transport (tcp) Sep 13 00:09:21.448675 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:09:21.448769 kernel: QLogic iSCSI HBA Driver Sep 13 00:09:21.485226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:21.495509 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:09:21.522340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:09:21.522414 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:09:21.526740 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:09:21.567360 kernel: raid6: avx512x4 gen() 18344 MB/s Sep 13 00:09:21.586329 kernel: raid6: avx512x2 gen() 18322 MB/s Sep 13 00:09:21.604324 kernel: raid6: avx512x1 gen() 18123 MB/s Sep 13 00:09:21.622336 kernel: raid6: avx2x4 gen() 18246 MB/s Sep 13 00:09:21.641325 kernel: raid6: avx2x2 gen() 18323 MB/s Sep 13 00:09:21.661004 kernel: raid6: avx2x1 gen() 13813 MB/s Sep 13 00:09:21.661055 kernel: raid6: using algorithm avx512x4 gen() 18344 MB/s Sep 13 00:09:21.682354 kernel: raid6: .... xor() 8095 MB/s, rmw enabled Sep 13 00:09:21.682404 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:09:21.705336 kernel: xor: automatically using best checksumming function avx Sep 13 00:09:21.852347 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:09:21.862414 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:21.870459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:21.884228 systemd-udevd[395]: Using default interface naming scheme 'v255'. Sep 13 00:09:21.888917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
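The raid6 lines above are a boot-time benchmark: each SIMD gen() variant is timed and the fastest one is kept (avx512x4 here). The selection amounts to an argmax over the measured rates:

    # gen() throughput in MB/s, copied from the raid6 benchmark above.
    results = {"avx512x4": 18344, "avx512x2": 18322, "avx512x1": 18123,
               "avx2x4": 18246, "avx2x2": 18323, "avx2x1": 13813}
    best = max(results, key=results.get)
    print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")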
Sep 13 00:09:21.901594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:09:21.914870 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Sep 13 00:09:21.942734 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:21.955503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:21.998972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:22.010547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:09:22.033380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:22.043279 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:22.050601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:22.056387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:22.069591 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:09:22.089337 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:09:22.101343 kernel: hv_vmbus: Vmbus version:5.2 Sep 13 00:09:22.108267 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:22.121333 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:09:22.128011 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:22.132065 kernel: AES CTR mode by8 optimization enabled Sep 13 00:09:22.128138 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:22.139640 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:22.145998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:22.146226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:22.151365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:22.165726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:22.178693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:23.024724 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:09:23.024756 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:09:23.024771 kernel: PTP clock support registered Sep 13 00:09:23.024795 kernel: hv_utils: Registering HyperV Utility Driver Sep 13 00:09:23.024806 kernel: hv_vmbus: registering driver hv_utils Sep 13 00:09:23.024818 kernel: hv_utils: Heartbeat IC version 3.0 Sep 13 00:09:23.024830 kernel: hv_utils: Shutdown IC version 3.2 Sep 13 00:09:23.024842 kernel: hv_utils: TimeSync IC version 4.0 Sep 13 00:09:23.024856 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:09:23.024867 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 13 00:09:23.024878 kernel: hv_vmbus: registering driver hid_hyperv Sep 13 00:09:23.024888 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 13 00:09:23.024902 kernel: hv_vmbus: registering driver hv_netvsc Sep 13 00:09:23.024914 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 13 00:09:23.024926 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 13 00:09:22.184895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:22.978480 systemd-resolved[208]: Clock change detected. Flushing caches. Sep 13 00:09:23.041041 kernel: hv_vmbus: registering driver hv_storvsc Sep 13 00:09:23.046471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:23.057540 kernel: scsi host0: storvsc_host_t Sep 13 00:09:23.057633 kernel: scsi host1: storvsc_host_t Sep 13 00:09:23.064373 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 13 00:09:23.066709 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 13 00:09:23.083345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:23.098072 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 13 00:09:23.098373 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:09:23.103875 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 13 00:09:23.100535 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:09:23.122908 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 13 00:09:23.123329 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:09:23.127442 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:09:23.127683 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 13 00:09:23.123911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
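
Among the storvsc messages above, the kernel sizes sda at 63737856 logical blocks of 512 bytes and prints the result in both decimal and binary units. The conversion it performs:

# sda geometry as reported by the kernel above.
BLOCKS = 63_737_856
BLOCK_SIZE = 512  # bytes per logical block

size_bytes = BLOCKS * BLOCK_SIZE          # 32_633_782_272 bytes
size_gb = size_bytes / 1000**3            # decimal units -> ~32.6 GB
size_gib = size_bytes / 1024**3           # binary units  -> ~30.4 GiB
print(f"{size_gb:.1f} GB / {size_gib:.1f} GiB")  # matches "(32.6 GB/30.4 GiB)"
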
Sep 13 00:09:23.134413 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 13 00:09:23.139060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:23.142047 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:09:23.190058 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: VF slot 1 added Sep 13 00:09:23.200494 kernel: hv_vmbus: registering driver hv_pci Sep 13 00:09:23.200565 kernel: hv_pci 66abed60-5ed8-4d24-a750-5fb03832036f: PCI VMBus probing: Using version 0x10004 Sep 13 00:09:23.207582 kernel: hv_pci 66abed60-5ed8-4d24-a750-5fb03832036f: PCI host bridge to bus 5ed8:00 Sep 13 00:09:23.207933 kernel: pci_bus 5ed8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Sep 13 00:09:23.210319 kernel: pci_bus 5ed8:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 00:09:23.215045 kernel: pci 5ed8:00:02.0: [15b3:1016] type 00 class 0x020000 Sep 13 00:09:23.219050 kernel: pci 5ed8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:09:23.223390 kernel: pci 5ed8:00:02.0: enabling Extended Tags Sep 13 00:09:23.233068 kernel: pci 5ed8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5ed8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Sep 13 00:09:23.239963 kernel: pci_bus 5ed8:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 00:09:23.240371 kernel: pci 5ed8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Sep 13 00:09:23.413380 kernel: mlx5_core 5ed8:00:02.0: enabling device (0000 -> 0002) Sep 13 00:09:23.418057 kernel: mlx5_core 5ed8:00:02.0: firmware version: 14.30.5000 Sep 13 00:09:23.648043 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: VF registering: eth1 Sep 13 00:09:23.650260 kernel: mlx5_core 5ed8:00:02.0 eth1: joined to eth0 Sep 13 00:09:23.655606 kernel: mlx5_core 5ed8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 13 00:09:23.668188 kernel: mlx5_core 5ed8:00:02.0 enP24280s1: renamed from eth1 Sep 13 00:09:23.708096 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (441) Sep 13 00:09:23.727086 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (445) Sep 13 00:09:23.743764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 13 00:09:23.788173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 13 00:09:23.804996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 13 00:09:23.816351 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 13 00:09:23.842129 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 13 00:09:23.857273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:09:23.883109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:23.894047 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:23.902043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:24.902121 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:09:24.902899 disk-uuid[599]: The operation has completed successfully. Sep 13 00:09:25.014798 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:09:25.014940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Sep 13 00:09:25.045198 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:09:25.054874 sh[712]: Success Sep 13 00:09:25.090201 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:09:25.455216 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:09:25.465150 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:09:25.470719 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:09:25.491046 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:09:25.491106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:25.495522 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:09:25.498112 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:09:25.500468 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:09:25.878641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:09:25.882913 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:09:25.890303 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:09:25.898574 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:09:25.915498 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:25.915563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:25.917713 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:25.978092 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:25.991859 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:09:25.996162 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:26.007408 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:09:26.017281 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:09:26.026967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:26.039293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:26.059457 systemd-networkd[896]: lo: Link UP Sep 13 00:09:26.059468 systemd-networkd[896]: lo: Gained carrier Sep 13 00:09:26.061757 systemd-networkd[896]: Enumeration completed Sep 13 00:09:26.061880 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:26.064859 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:26.064865 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:26.064925 systemd[1]: Reached target network.target - Network. 
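
verity-setup is bringing up /dev/mapper/usr here, and the kernel notes that dm-verity will hash with sha256-avx2. dm-verity authenticates the read-only /usr partition through a Merkle tree of block hashes whose root must equal the verity.usrhash= value on the kernel command line logged earlier. A simplified, hypothetical sketch of that tree construction (real dm-verity adds a salt and a defined on-disk layout, both omitted here):

import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def sha256(b):
    return hashlib.sha256(b).digest()

def verity_root(data):
    # Level 0: hash every data block.
    level = [sha256(data[i:i + BLOCK].ljust(BLOCK, b"\0"))
             for i in range(0, len(data), BLOCK)]
    # Higher levels: pack child digests into blocks and hash those,
    # until a single root digest remains.
    per_block = BLOCK // 32  # 32-byte sha256 digests per hash block
    while len(level) > 1:
        packed = [b"".join(level[i:i + per_block]).ljust(BLOCK, b"\0")
                  for i in range(0, len(level), per_block)]
        level = [sha256(p) for p in packed]
    return level[0].hex()

print(verity_root(b"usr partition contents" * 1000))
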
Sep 13 00:09:26.140046 kernel: mlx5_core 5ed8:00:02.0 enP24280s1: Link up Sep 13 00:09:26.140368 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:09:26.181045 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: Data path switched to VF: enP24280s1 Sep 13 00:09:26.182065 systemd-networkd[896]: enP24280s1: Link UP Sep 13 00:09:26.182196 systemd-networkd[896]: eth0: Link UP Sep 13 00:09:26.182402 systemd-networkd[896]: eth0: Gained carrier Sep 13 00:09:26.182417 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:26.186331 systemd-networkd[896]: enP24280s1: Gained carrier Sep 13 00:09:26.224118 systemd-networkd[896]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 13 00:09:27.099689 ignition[891]: Ignition 2.19.0 Sep 13 00:09:27.099702 ignition[891]: Stage: fetch-offline Sep 13 00:09:27.099746 ignition[891]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:27.099757 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:27.099880 ignition[891]: parsed url from cmdline: "" Sep 13 00:09:27.099885 ignition[891]: no config URL provided Sep 13 00:09:27.099892 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:27.099904 ignition[891]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:27.099912 ignition[891]: failed to fetch config: resource requires networking Sep 13 00:09:27.101528 ignition[891]: Ignition finished successfully Sep 13 00:09:27.117392 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:27.126309 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:09:27.143848 ignition[904]: Ignition 2.19.0 Sep 13 00:09:27.143861 ignition[904]: Stage: fetch Sep 13 00:09:27.144164 ignition[904]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:27.144180 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:27.144316 ignition[904]: parsed url from cmdline: "" Sep 13 00:09:27.144319 ignition[904]: no config URL provided Sep 13 00:09:27.144324 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:09:27.144332 ignition[904]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:09:27.144351 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 00:09:27.264531 ignition[904]: GET result: OK Sep 13 00:09:27.264725 ignition[904]: config has been read from IMDS userdata Sep 13 00:09:27.264767 ignition[904]: parsing config with SHA512: e2c381b0f25a70bb6feba3e2c993d19d3ab02c7bc875604c71105d6ba1b582ae192814fbcfa11b26da7a9a1fb849f0fcbc29494e41dfaefad0c500ea21212f98 Sep 13 00:09:27.273332 unknown[904]: fetched base config from "system" Sep 13 00:09:27.273353 unknown[904]: fetched base config from "system" Sep 13 00:09:27.276426 ignition[904]: fetch: fetch complete Sep 13 00:09:27.273362 unknown[904]: fetched user config from "azure" Sep 13 00:09:27.276434 ignition[904]: fetch: fetch passed Sep 13 00:09:27.276500 ignition[904]: Ignition finished successfully Sep 13 00:09:27.286913 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:09:27.299301 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
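
The fetch stage above reads its config from the Azure IMDS userData endpoint and then logs the SHA512 of the parsed config. A rough equivalent of that request using only the standard library (the Metadata: true header is an Azure IMDS requirement; treating the response as base64, as IMDS serves userData, is an assumption about this instance):

import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

# Azure IMDS only answers requests that carry this header.
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    body = resp.read()

# IMDS serves userData base64-encoded (assumed to apply here).
config = base64.b64decode(body)
print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
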
Sep 13 00:09:27.317213 ignition[910]: Ignition 2.19.0 Sep 13 00:09:27.317224 ignition[910]: Stage: kargs Sep 13 00:09:27.317449 ignition[910]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:27.321225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:09:27.317462 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:27.318432 ignition[910]: kargs: kargs passed Sep 13 00:09:27.318486 ignition[910]: Ignition finished successfully Sep 13 00:09:27.335528 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:09:27.355435 ignition[916]: Ignition 2.19.0 Sep 13 00:09:27.355447 ignition[916]: Stage: disks Sep 13 00:09:27.355692 ignition[916]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:27.358034 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:09:27.355707 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:27.361931 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:27.356589 ignition[916]: disks: disks passed Sep 13 00:09:27.366140 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:09:27.356642 ignition[916]: Ignition finished successfully Sep 13 00:09:27.369183 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:27.373477 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:09:27.376003 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:09:27.410232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:09:27.470720 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 13 00:09:27.476494 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:09:27.492155 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:09:27.583367 systemd-networkd[896]: eth0: Gained IPv6LL Sep 13 00:09:27.586659 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:09:27.587769 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:09:27.592322 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:09:27.631285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:27.652080 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935) Sep 13 00:09:27.664199 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:27.664262 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:27.664275 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:27.668479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:09:27.678313 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:09:27.684144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:09:27.684185 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:27.688356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
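
systemd-fsck reports the root filesystem above as clean, with used/total pairs for inodes and blocks. Read as percentages:

# Figures from the systemd-fsck[924] line above.
files_used, files_total = 14, 7_326_000
blocks_used, blocks_total = 477_710, 7_359_488

print(f"inodes: {100 * files_used / files_total:.4f}% used")   # ~0.0002%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used") # ~6.5%
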
Sep 13 00:09:27.707182 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:27.696542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:09:27.706789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:09:28.450136 coreos-metadata[950]: Sep 13 00:09:28.449 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:09:28.460190 coreos-metadata[950]: Sep 13 00:09:28.460 INFO Fetch successful Sep 13 00:09:28.462640 coreos-metadata[950]: Sep 13 00:09:28.460 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:09:28.471813 coreos-metadata[950]: Sep 13 00:09:28.471 INFO Fetch successful Sep 13 00:09:28.476302 coreos-metadata[950]: Sep 13 00:09:28.474 INFO wrote hostname ci-4081.3.5-n-e49e858a9f to /sysroot/etc/hostname Sep 13 00:09:28.481561 initrd-setup-root[966]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:09:28.481274 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:09:28.517160 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:09:28.548710 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:09:28.567760 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:09:29.726199 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:29.739175 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:09:29.744770 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:09:29.756097 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:09:29.760476 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:29.794489 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:09:29.799568 ignition[1056]: INFO : Ignition 2.19.0 Sep 13 00:09:29.799568 ignition[1056]: INFO : Stage: mount Sep 13 00:09:29.803159 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:29.803159 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:29.803159 ignition[1056]: INFO : mount: mount passed Sep 13 00:09:29.803159 ignition[1056]: INFO : Ignition finished successfully Sep 13 00:09:29.801880 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:09:29.818259 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:09:29.826823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:09:29.851220 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1068) Sep 13 00:09:29.851297 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:09:29.854104 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:09:29.856561 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:09:29.863044 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:09:29.864918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
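
flatcar-metadata-hostname, above, fetched the instance name from the IMDS endpoint in the log and wrote it into /sysroot/etc/hostname. The same two steps, sketched (the Metadata header requirement is as in the userData example earlier; a local file stands in for the /sysroot path to keep the demo side-effect-free):

import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    hostname = resp.read().decode().strip()

# The initrd writes to /sysroot/etc/hostname; use a local file here.
with open("hostname", "w") as f:
    f.write(hostname + "\n")
print("wrote hostname", hostname)
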
Sep 13 00:09:29.891048 ignition[1085]: INFO : Ignition 2.19.0 Sep 13 00:09:29.891048 ignition[1085]: INFO : Stage: files Sep 13 00:09:29.891048 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:29.891048 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:29.903292 ignition[1085]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:09:29.907474 ignition[1085]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:09:29.907474 ignition[1085]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:09:30.137811 ignition[1085]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:09:30.141592 ignition[1085]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:09:30.141592 ignition[1085]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:09:30.138317 unknown[1085]: wrote ssh authorized keys file for user: core Sep 13 00:09:30.175688 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:30.180697 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:09:30.524637 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:09:30.930963 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:09:30.930963 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:30.939516 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:09:30.939516 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:30.947852 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:09:31.553818 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:09:33.066030 ignition[1085]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:09:33.066030 ignition[1085]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:09:33.100860 ignition[1085]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:09:33.107561 ignition[1085]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:09:33.122532 ignition[1085]: INFO : files: files passed Sep 13 00:09:33.122532 ignition[1085]: INFO : Ignition finished successfully Sep 13 00:09:33.109626 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:09:33.135922 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:09:33.152200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:09:33.155320 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:09:33.155441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:09:33.167010 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.167010 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.171829 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:09:33.169276 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:33.183724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:09:33.199227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:09:33.225242 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:09:33.225375 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
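
The files stage above is driven by a declarative config. The logged operations (writing the Helm tarball from get.helm.sh, update.conf, the kubernetes.raw extension link, and enabling prepare-helm.service) would come from entries shaped like this hypothetical Ignition-style config, printed here as JSON from a Python dict (illustrative only; the instance's real config and its spec version are not shown in the log):

import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf"},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}
print(json.dumps(config, indent=2))
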
Sep 13 00:09:33.234087 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:09:33.236581 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:09:33.241062 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:09:33.250398 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:09:33.266757 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:33.279276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:09:33.289981 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:33.293169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:33.299176 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:09:33.304387 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:09:33.304557 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:09:33.307669 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:09:33.308062 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:09:33.308516 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:09:33.308979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:09:33.309361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:09:33.309753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:09:33.310164 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:09:33.310619 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:09:33.310991 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:09:33.312013 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:09:33.312489 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:09:33.312645 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:09:33.313332 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:33.313981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:33.314300 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:09:33.355070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:33.359231 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:09:33.359375 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:09:33.364560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:09:33.364739 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:09:33.369221 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:09:33.369379 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:09:33.374531 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:09:33.374690 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:09:33.427257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 13 00:09:33.430592 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:09:33.430808 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:33.447361 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:09:33.450273 ignition[1137]: INFO : Ignition 2.19.0 Sep 13 00:09:33.450273 ignition[1137]: INFO : Stage: umount Sep 13 00:09:33.450273 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:09:33.450273 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:09:33.450273 ignition[1137]: INFO : umount: umount passed Sep 13 00:09:33.450273 ignition[1137]: INFO : Ignition finished successfully Sep 13 00:09:33.452133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:09:33.452413 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:33.457400 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:09:33.457524 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:09:33.466404 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:09:33.466521 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:09:33.471938 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:09:33.479036 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:09:33.487244 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:09:33.487359 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:09:33.490848 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:09:33.490913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:09:33.495186 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:09:33.495241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:09:33.505063 systemd[1]: Stopped target network.target - Network. Sep 13 00:09:33.509428 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:09:33.511623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:09:33.527176 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:09:33.529409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:09:33.534079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:33.540172 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:09:33.542362 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:09:33.546301 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:09:33.546365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:09:33.550652 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:09:33.550705 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:09:33.554816 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:09:33.556843 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:09:33.567681 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:09:33.567770 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:09:33.574534 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 13 00:09:33.579723 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:09:33.581826 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:09:33.588104 systemd-networkd[896]: eth0: DHCPv6 lease lost Sep 13 00:09:33.591318 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:09:33.591458 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:09:33.596392 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:09:33.596491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:09:33.604105 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:09:33.604165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:33.617263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:09:33.619237 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:09:33.619316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:09:33.630432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:09:33.630516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:33.638318 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:09:33.638400 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:33.643111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:09:33.643179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:33.648882 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:33.668720 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:09:33.670861 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:33.676883 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:09:33.676949 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:33.684112 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:09:33.684171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:33.688712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:09:33.688782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:09:33.694191 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:09:33.694251 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:09:33.712076 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: Data path switched from VF: enP24280s1 Sep 13 00:09:33.698652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:09:33.698704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:09:33.718210 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:09:33.720808 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:09:33.720898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:33.729485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:09:33.729549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 00:09:33.740328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:09:33.740476 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:09:33.750731 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:09:33.750840 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:09:34.231737 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:09:34.231871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:09:34.235108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:09:34.240174 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:09:34.240254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:09:34.258229 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:09:34.541840 systemd[1]: Switching root. Sep 13 00:09:34.631708 systemd-journald[176]: Journal stopped Sep 13 00:09:42.155285 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Sep 13 00:09:42.155340 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:09:42.155364 kernel: SELinux: policy capability open_perms=1 Sep 13 00:09:42.155383 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:09:42.155401 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:09:42.155416 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:09:42.155437 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:09:42.155461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:09:42.155478 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:09:42.155495 kernel: audit: type=1403 audit(1757722176.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:09:42.155518 systemd[1]: Successfully loaded SELinux policy in 180.989ms. Sep 13 00:09:42.155546 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.422ms. Sep 13 00:09:42.155567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:09:42.155586 systemd[1]: Detected virtualization microsoft. Sep 13 00:09:42.155610 systemd[1]: Detected architecture x86-64. Sep 13 00:09:42.155625 systemd[1]: Detected first boot. Sep 13 00:09:42.155638 systemd[1]: Hostname set to . Sep 13 00:09:42.155652 systemd[1]: Initializing machine ID from random generator. Sep 13 00:09:42.155665 zram_generator::config[1180]: No configuration found. Sep 13 00:09:42.155685 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:09:42.155698 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:09:42.155708 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:09:42.155718 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:09:42.155728 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:09:42.155738 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:09:42.155748 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
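
After switch-root, journald restarts and systemd prints its compile-time feature string (+PAM +AUDIT +SELINUX -APPARMOR ...). The +/- prefixes mark options built in or left out; a small parser over the string copied from the log above:

FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT")

enabled = {t[1:] for t in FEATURES.split() if t.startswith("+")}
disabled = {t[1:] for t in FEATURES.split() if t.startswith("-")}
print(f"{len(enabled)} built in, {len(disabled)} left out")
print("SELinux built in:", "SELINUX" in enabled)  # matches the policy load above
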
Sep 13 00:09:42.155760 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:09:42.155770 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:09:42.155780 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:09:42.155792 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:09:42.155805 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:09:42.155815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:09:42.155825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:09:42.155835 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:09:42.155847 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:09:42.155858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:09:42.155868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:09:42.155878 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:09:42.155887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:09:42.155900 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:09:42.155914 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:09:42.155925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:09:42.155940 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:09:42.155953 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:09:42.155964 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:09:42.155976 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:09:42.155989 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:09:42.156002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:09:42.156012 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:09:42.156059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:09:42.156075 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:09:42.156089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:09:42.156101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:09:42.156113 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:09:42.156127 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:09:42.156141 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:09:42.156154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:42.156166 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:09:42.156180 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:09:42.156190 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 13 00:09:42.156204 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:09:42.156215 systemd[1]: Reached target machines.target - Containers. Sep 13 00:09:42.156230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:09:42.156244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:42.156255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:09:42.156267 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:09:42.156279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:42.156294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:09:42.156308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:42.156320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:09:42.156332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:42.156348 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:09:42.156361 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:09:42.156372 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:09:42.156385 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:09:42.156397 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:09:42.156409 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:09:42.156423 kernel: fuse: init (API version 7.39) Sep 13 00:09:42.156433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:09:42.156448 kernel: loop: module loaded Sep 13 00:09:42.156458 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:09:42.156470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:09:42.156481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:09:42.156494 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:09:42.156504 systemd[1]: Stopped verity-setup.service. Sep 13 00:09:42.156518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:42.156551 systemd-journald[1262]: Collecting audit messages is disabled. Sep 13 00:09:42.156580 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:09:42.156593 systemd-journald[1262]: Journal started Sep 13 00:09:42.156617 systemd-journald[1262]: Runtime Journal (/run/log/journal/c4170fc4a33747f1bfac283ed382aba5) is 8.0M, max 158.8M, 150.8M free. Sep 13 00:09:41.359501 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:09:41.538005 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:09:41.538417 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:09:42.163446 systemd[1]: Started systemd-journald.service - Journal Service. 
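
journald describes its runtime journal above as "8.0M, max 158.8M, 150.8M free": current usage plus the cap it will enforce, so used plus free should equal the max. Checking that against the logged line:

import re

LINE = ("Runtime Journal (/run/log/journal/c4170fc4a33747f1bfac283ed382aba5) "
        "is 8.0M, max 158.8M, 150.8M free.")

used, cap, free = (float(x) for x in re.findall(r"([\d.]+)M", LINE))
print(f"used {used:.1f}M + free {free:.1f}M = {used + free:.1f}M (cap {cap:.1f}M)")
# -> used 8.0M + free 150.8M = 158.8M (cap 158.8M)
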
Sep 13 00:09:42.164170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:09:42.166946 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:09:42.169296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:09:42.171945 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:09:42.174607 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:09:42.177168 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:09:42.180853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:09:42.184122 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:09:42.184411 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:09:42.187585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:42.187753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:42.190694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:42.190859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:42.194156 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:09:42.194319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:09:42.197253 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:42.197408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:42.200783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:09:42.204275 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:09:42.218535 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:09:42.231152 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:09:42.239110 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:09:42.243000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:09:42.243070 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:09:42.250626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:09:42.256245 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:09:42.269288 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:09:42.271898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:09:42.297556 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:09:42.310520 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:09:42.313699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:42.316214 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:09:42.320161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 13 00:09:42.322541 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:09:42.329232 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:09:42.336258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:09:42.340222 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:09:42.344450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:09:42.355800 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:09:42.364136 systemd-journald[1262]: Time spent on flushing to /var/log/journal/c4170fc4a33747f1bfac283ed382aba5 is 33.243ms for 955 entries. Sep 13 00:09:42.364136 systemd-journald[1262]: System Journal (/var/log/journal/c4170fc4a33747f1bfac283ed382aba5) is 8.0M, max 2.6G, 2.6G free. Sep 13 00:09:42.429414 systemd-journald[1262]: Received client request to flush runtime journal. Sep 13 00:09:42.429462 kernel: ACPI: bus type drm_connector registered Sep 13 00:09:42.392363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:09:42.397710 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:09:42.397917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:09:42.401004 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:09:42.419217 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:09:42.423786 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:09:42.427624 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:09:42.437306 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:09:42.440640 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:09:42.449973 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 00:09:42.458752 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:09:42.506142 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:09:42.506984 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:09:42.552055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:09:42.635064 kernel: loop1: detected capacity change from 0 to 140768 Sep 13 00:09:42.637876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:09:43.088247 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:09:43.097356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:09:43.300948 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 13 00:09:43.300975 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Sep 13 00:09:43.320035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:09:43.352052 kernel: loop2: detected capacity change from 0 to 142488 Sep 13 00:09:44.254051 kernel: loop3: detected capacity change from 0 to 31056 Sep 13 00:09:44.573068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
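
The flush into /var/log/journal above took 33.243ms for 955 entries, which works out to roughly 35 microseconds per entry:

# Figures from the systemd-journald[1262] flush line above.
flush_ms, entries = 33.243, 955
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~34.8 us
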
Sep 13 00:09:44.583208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:09:44.606844 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Sep 13 00:09:45.010053 kernel: loop4: detected capacity change from 0 to 224512 Sep 13 00:09:45.028044 kernel: loop5: detected capacity change from 0 to 140768 Sep 13 00:09:45.045048 kernel: loop6: detected capacity change from 0 to 142488 Sep 13 00:09:45.066050 kernel: loop7: detected capacity change from 0 to 31056 Sep 13 00:09:45.078319 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 13 00:09:45.078787 (sd-merge)[1343]: Merged extensions into '/usr'. Sep 13 00:09:45.083200 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:09:45.083218 systemd[1]: Reloading... Sep 13 00:09:45.137274 zram_generator::config[1365]: No configuration found. Sep 13 00:09:45.304415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:45.364434 systemd[1]: Reloading finished in 280 ms. Sep 13 00:09:45.394988 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:09:45.414261 systemd[1]: Starting ensure-sysext.service... Sep 13 00:09:45.417578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:09:45.443078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:09:45.456267 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:09:45.482102 systemd[1]: Reloading requested from client PID 1427 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:09:45.482123 systemd[1]: Reloading... Sep 13 00:09:45.580051 zram_generator::config[1474]: No configuration found. Sep 13 00:09:45.616459 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:09:45.619199 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:09:45.623402 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:09:45.626476 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Sep 13 00:09:45.627890 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Sep 13 00:09:45.715274 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:09:45.716106 systemd-tmpfiles[1428]: Skipping /boot Sep 13 00:09:45.725063 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:09:45.759878 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. 
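
(sd-merge) above found the extensions containerd-flatcar, docker-flatcar, kubernetes, and oem-azure and merged them into /usr. systemd-sysext builds such a merge as an overlayfs, with one lower layer per extension image stacked over the base /usr; a hypothetical sketch of the layering (paths and priority order here are illustrative, not what systemd-sysext literally mounts):

# Extensions as logged by (sd-merge) above, lowest priority first.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-azure"]

# overlayfs lists lowerdir entries highest-priority first, with the
# original /usr as the bottom layer. (illustrative paths)
lowers = [f"/run/extensions/{name}/usr" for name in reversed(extensions)]
lowerdir = ":".join(lowers + ["/usr"])
print(f"mount -t overlay overlay -o lowerdir={lowerdir} /usr")
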
Sep 13 00:09:45.761784 systemd-tmpfiles[1428]: Skipping /boot Sep 13 00:09:45.797998 kernel: hv_vmbus: registering driver hv_balloon Sep 13 00:09:45.802057 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 13 00:09:45.802141 kernel: hv_vmbus: registering driver hyperv_fb Sep 13 00:09:45.810465 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 13 00:09:45.816209 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 13 00:09:45.822769 kernel: Console: switching to colour dummy device 80x25 Sep 13 00:09:45.828015 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:09:46.002285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:09:46.099043 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1442) Sep 13 00:09:46.240472 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:09:46.241445 systemd[1]: Reloading finished in 758 ms. Sep 13 00:09:46.267051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:09:46.304118 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 13 00:09:46.348074 systemd[1]: Finished ensure-sysext.service. Sep 13 00:09:46.362833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 13 00:09:46.365944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:09:46.371233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:09:46.399372 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:09:46.404168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:09:46.408281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:09:46.413250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:09:46.425371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:09:46.431326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:09:46.436314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:09:46.445303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:09:46.450957 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:09:46.461313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:09:46.464376 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:09:46.472383 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:09:46.484299 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:09:46.494290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:09:46.497173 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
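"Duplicate line for path" means two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first occurrence and ignores the rest, so these warnings are harmless. A sketch of the line format involved and of auditing the merged configuration (the example directory line is illustrative, not copied from this host):

    # tmpfiles.d fields: Type Path Mode User Group Age Argument, e.g.
    #   d /var/log/journal 2755 root systemd-journal - -
    # print the merged config with overrides and duplicates resolved
    systemd-tmpfiles --cat-config
    # apply it immediately instead of waiting for the next boot
    systemd-tmpfiles --create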
Sep 13 00:09:46.502430 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:09:46.508834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:09:46.510125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:09:46.516633 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:09:46.516861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:09:46.521380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:09:46.521586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:09:46.525544 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:09:46.525739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:09:46.556463 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:09:46.557571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:09:46.557726 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:09:46.567213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:09:46.586272 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:09:46.591178 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:09:46.636648 lvm[1618]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:46.683098 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:09:46.687509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:09:46.697331 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:09:46.719425 lvm[1635]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:09:46.751545 systemd-resolved[1602]: Positive Trust Anchors: Sep 13 00:09:46.751955 systemd-resolved[1602]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:09:46.752008 systemd-resolved[1602]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:09:46.754319 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:09:46.800637 augenrules[1639]: No rules Sep 13 00:09:46.802448 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
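"Failed to connect to lvmetad" simply means the lvmetad metadata-caching socket is absent (the daemon is not running here, and LVM 2.03+ removed it entirely), so the tools fall back to scanning devices directly and activation still succeeds. The augenrules "No rules" line likewise just reports an empty /etc/audit/rules.d. A sketch of verifying both by hand:

    # volume scan and activation work without lvmetad
    pvscan
    vgchange -ay
    # confirm no audit rules are present or loaded
    ls /etc/audit/rules.d/
    auditctl -l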
Sep 13 00:09:46.814428 systemd-networkd[1436]: lo: Link UP Sep 13 00:09:46.814438 systemd-networkd[1436]: lo: Gained carrier Sep 13 00:09:46.817126 systemd-networkd[1436]: Enumeration completed Sep 13 00:09:46.817539 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:46.817549 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:46.817909 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:09:46.823314 systemd-resolved[1602]: Using system hostname 'ci-4081.3.5-n-e49e858a9f'. Sep 13 00:09:46.828275 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:09:46.872055 kernel: mlx5_core 5ed8:00:02.0 enP24280s1: Link up Sep 13 00:09:46.872438 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:09:46.894760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:09:46.899428 kernel: hv_netvsc 7ced8d74-68e5-7ced-8d74-68e57ced8d74 eth0: Data path switched to VF: enP24280s1 Sep 13 00:09:46.900309 systemd-networkd[1436]: enP24280s1: Link UP Sep 13 00:09:46.900469 systemd-networkd[1436]: eth0: Link UP Sep 13 00:09:46.900473 systemd-networkd[1436]: eth0: Gained carrier Sep 13 00:09:46.900493 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:46.905512 systemd-networkd[1436]: enP24280s1: Gained carrier Sep 13 00:09:46.906137 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:09:46.909305 systemd[1]: Reached target network.target - Network. Sep 13 00:09:46.911578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:09:46.934095 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 13 00:09:48.119571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:09:48.254288 systemd-networkd[1436]: eth0: Gained IPv6LL Sep 13 00:09:48.256891 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:09:48.261281 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:09:49.306347 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:09:49.310115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:09:53.383745 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:09:53.394164 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:09:53.403288 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:09:53.434367 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:09:53.438191 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:09:53.441136 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:09:53.444221 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
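eth0 is matched here by Flatcar's catch-all zz-default.network, which amounts to enabling DHCP on any otherwise-unconfigured interface; the "potentially unpredictable interface name" note is networkd warning that matching by name can break if the kernel renames devices. A minimal sketch of an equivalent per-interface unit and of checking the lease (the file name is illustrative):

    cat <<'EOF' >/etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl reload
    networkctl status eth0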
Sep 13 00:09:53.447224 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:09:53.449620 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:09:53.452411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:09:53.455095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:09:53.455136 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:09:53.457069 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:09:53.486499 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:09:53.491056 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:09:53.588845 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:09:53.592845 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:09:53.595708 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:09:53.597884 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:09:53.600182 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:09:53.600218 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:09:53.625173 systemd[1]: Starting chronyd.service - NTP client/server... Sep 13 00:09:53.630121 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:09:53.640214 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:09:53.648271 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:09:53.657231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:09:53.669318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:09:53.673712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:09:53.673776 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 13 00:09:53.677131 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 13 00:09:53.681609 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 13 00:09:53.683097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:09:53.685363 (chronyd)[1658]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 13 00:09:53.694563 jq[1664]: false Sep 13 00:09:53.694705 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:09:53.701234 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:09:53.708652 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:09:53.717286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:09:53.730247 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
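The "ListenStream= references a path below legacy directory /var/run/" complaints seen during the reloads earlier come from docker.socket, and systemd already rewrites the path to /run/docker.sock at load time. A drop-in override that fixes the unit for good, as a sketch (the empty ListenStream= first resets the inherited list):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runtime-dir.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload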
Sep 13 00:09:53.738217 KVP[1666]: KVP starting; pid is:1666 Sep 13 00:09:53.739458 chronyd[1676]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 13 00:09:53.742747 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:09:53.745843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:09:53.747271 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:09:53.749326 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:09:53.755234 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:09:53.760336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:09:53.760621 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:09:53.801138 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:09:53.801374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:09:53.809803 chronyd[1676]: Timezone right/UTC failed leap second check, ignoring Sep 13 00:09:53.810241 chronyd[1676]: Loaded seccomp filter (level 2) Sep 13 00:09:53.815445 systemd[1]: Started chronyd.service - NTP client/server. Sep 13 00:09:53.831448 jq[1679]: true Sep 13 00:09:53.841281 kernel: hv_utils: KVP IC version 4.0 Sep 13 00:09:53.842495 KVP[1666]: KVP LIC Version: 3.1 Sep 13 00:09:53.847649 extend-filesystems[1665]: Found loop4 Sep 13 00:09:53.847649 extend-filesystems[1665]: Found loop5 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found loop6 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found loop7 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda1 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda2 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda3 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found usr Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda4 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda6 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda7 Sep 13 00:09:53.863448 extend-filesystems[1665]: Found sda9 Sep 13 00:09:53.863448 extend-filesystems[1665]: Checking size of /dev/sda9 Sep 13 00:09:53.936196 update_engine[1678]: I20250913 00:09:53.882123 1678 main.cc:92] Flatcar Update Engine starting Sep 13 00:09:53.936485 extend-filesystems[1665]: Old size kept for /dev/sda9 Sep 13 00:09:53.936485 extend-filesystems[1665]: Found sr0 Sep 13 00:09:53.863493 (ntainerd)[1691]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:09:53.960638 tar[1683]: linux-amd64/LICENSE Sep 13 00:09:53.960638 tar[1683]: linux-amd64/helm Sep 13 00:09:53.877728 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:09:53.975225 jq[1699]: true Sep 13 00:09:53.877953 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:09:53.918828 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:09:53.919116 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
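chronyd 4.5 comes up with its level-2 seccomp filter and, as the "Selected source PHC0" line near the end of this log shows, it ends up synchronizing from the Hyper-V PTP hardware clock rather than from network NTP. That normally comes from a refclock directive in chrony.conf; a sketch of the usual Azure fragment and runtime checks (device path and poll values are the commonly recommended ones, not read from this host):

    # chrony.conf: use the Hyper-V-provided PTP device as reference clock
    refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0
    # runtime state
    chronyc tracking
    chronyc sources -v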
Sep 13 00:09:53.928172 systemd-logind[1677]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 13 00:09:53.935384 systemd-logind[1677]: New seat seat0. Sep 13 00:09:53.939750 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:09:53.970795 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:09:54.117740 bash[1747]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:09:54.121553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:09:54.128174 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:09:54.156590 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1720) Sep 13 00:09:54.153648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:09:54.153425 dbus-daemon[1661]: [system] SELinux support is enabled Sep 13 00:09:54.164494 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:09:54.165416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:09:54.169401 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:09:54.170055 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:09:54.176997 dbus-daemon[1661]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:09:54.181876 update_engine[1678]: I20250913 00:09:54.181686 1678 update_check_scheduler.cc:74] Next update check in 9m22s Sep 13 00:09:54.182148 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:09:54.194859 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:09:54.385991 coreos-metadata[1660]: Sep 13 00:09:54.383 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:09:54.392971 coreos-metadata[1660]: Sep 13 00:09:54.391 INFO Fetch successful Sep 13 00:09:54.392971 coreos-metadata[1660]: Sep 13 00:09:54.391 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 13 00:09:54.400989 coreos-metadata[1660]: Sep 13 00:09:54.400 INFO Fetch successful Sep 13 00:09:54.401737 coreos-metadata[1660]: Sep 13 00:09:54.401 INFO Fetching http://168.63.129.16/machine/cba4b159-4780-4e54-ae92-b1caab3faf9b/8b91ac1e%2D8ae2%2D499d%2Da17c%2D0c10fb72780b.%5Fci%2D4081.3.5%2Dn%2De49e858a9f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 13 00:09:54.404234 coreos-metadata[1660]: Sep 13 00:09:54.404 INFO Fetch successful Sep 13 00:09:54.408707 coreos-metadata[1660]: Sep 13 00:09:54.405 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:09:54.419135 coreos-metadata[1660]: Sep 13 00:09:54.419 INFO Fetch successful Sep 13 00:09:54.495381 sshd_keygen[1709]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:09:54.500210 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:09:54.504674 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
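coreos-metadata is talking to Azure's two link-local endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the Instance Metadata Service at 169.254.169.254. Both calls are easy to replay by hand; the IMDS one needs only the Metadata header (URLs copied from the log above):

    curl -s -H "Metadata:true" --noproxy "*" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"
    curl -s "http://168.63.129.16/?comp=versions"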
Sep 13 00:09:54.549666 locksmithd[1751]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:09:54.559598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:09:54.572469 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:09:54.587434 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 13 00:09:54.624568 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:09:54.624816 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:09:54.638382 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:09:54.652619 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 13 00:09:54.684204 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:09:54.703709 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:09:54.710915 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:09:54.714631 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:09:54.996249 tar[1683]: linux-amd64/README.md Sep 13 00:09:55.025577 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:09:55.034055 containerd[1691]: time="2025-09-13T00:09:55.033961100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:09:55.073916 containerd[1691]: time="2025-09-13T00:09:55.073839500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.075795 containerd[1691]: time="2025-09-13T00:09:55.075745800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:55.075795 containerd[1691]: time="2025-09-13T00:09:55.075785200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:09:55.075971 containerd[1691]: time="2025-09-13T00:09:55.075807200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:09:55.076049 containerd[1691]: time="2025-09-13T00:09:55.076009300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:09:55.076092 containerd[1691]: time="2025-09-13T00:09:55.076054800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076163 containerd[1691]: time="2025-09-13T00:09:55.076140800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076163 containerd[1691]: time="2025-09-13T00:09:55.076158600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076376 containerd[1691]: time="2025-09-13T00:09:55.076350000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076430 containerd[1691]: time="2025-09-13T00:09:55.076378100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076430 containerd[1691]: time="2025-09-13T00:09:55.076398200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076430 containerd[1691]: time="2025-09-13T00:09:55.076412100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076536 containerd[1691]: time="2025-09-13T00:09:55.076512900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076757 containerd[1691]: time="2025-09-13T00:09:55.076728200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076898 containerd[1691]: time="2025-09-13T00:09:55.076873600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:55.076898 containerd[1691]: time="2025-09-13T00:09:55.076893500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:09:55.077032 containerd[1691]: time="2025-09-13T00:09:55.076999100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:09:55.077111 containerd[1691]: time="2025-09-13T00:09:55.077089900Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094322500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094422400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094447100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094471700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094496700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.094697800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095011100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095167600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095191400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095210500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095231800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095251800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095271800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096049 containerd[1691]: time="2025-09-13T00:09:55.095292500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095312400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095333500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095366800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095385500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095412400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095432100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095449500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095468100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095484900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095502600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095521700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095545500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095565600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:55.096524 containerd[1691]: time="2025-09-13T00:09:55.095589900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095607600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095627400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095650900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095673600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095700600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095716600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095745800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095799200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095823000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095839700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095856300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095869900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095886800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:09:55.096993 containerd[1691]: time="2025-09-13T00:09:55.095928500Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:09:55.097476 containerd[1691]: time="2025-09-13T00:09:55.095944100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:55.097804 containerd[1691]: time="2025-09-13T00:09:55.097740000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:09:55.098038 containerd[1691]: time="2025-09-13T00:09:55.098007200Z" level=info msg="Connect containerd service" Sep 13 00:09:55.098163 containerd[1691]: time="2025-09-13T00:09:55.098148100Z" level=info msg="using legacy CRI server" Sep 13 00:09:55.098223 containerd[1691]: time="2025-09-13T00:09:55.098210200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:09:55.098413 containerd[1691]: time="2025-09-13T00:09:55.098393800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:09:55.099792 containerd[1691]: time="2025-09-13T00:09:55.099740500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:09:55.100712 
containerd[1691]: time="2025-09-13T00:09:55.100687900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:09:55.100882 containerd[1691]: time="2025-09-13T00:09:55.100854600Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:09:55.101333 containerd[1691]: time="2025-09-13T00:09:55.101288500Z" level=info msg="Start subscribing containerd event" Sep 13 00:09:55.101479 containerd[1691]: time="2025-09-13T00:09:55.101453000Z" level=info msg="Start recovering state" Sep 13 00:09:55.101658 containerd[1691]: time="2025-09-13T00:09:55.101641700Z" level=info msg="Start event monitor" Sep 13 00:09:55.101754 containerd[1691]: time="2025-09-13T00:09:55.101741000Z" level=info msg="Start snapshots syncer" Sep 13 00:09:55.101824 containerd[1691]: time="2025-09-13T00:09:55.101812500Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:09:55.101916 containerd[1691]: time="2025-09-13T00:09:55.101874700Z" level=info msg="Start streaming server" Sep 13 00:09:55.102455 containerd[1691]: time="2025-09-13T00:09:55.102429100Z" level=info msg="containerd successfully booted in 0.069580s" Sep 13 00:09:55.102650 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:09:55.638963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:55.642317 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:09:55.644923 systemd[1]: Startup finished in 1.084s (firmware) + 30.204s (loader) + 975ms (kernel) + 14.610s (initrd) + 19.549s (userspace) = 1min 6.424s. Sep 13 00:09:55.651819 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:09:56.318785 kubelet[1824]: E0913 00:09:56.318732 1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:09:56.320529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:09:56.320658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:09:56.320983 systemd[1]: kubelet.service: Consumed 1.004s CPU time. Sep 13 00:09:56.462790 login[1807]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 13 00:09:56.463209 login[1806]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:09:56.473678 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:09:56.478296 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:09:56.481240 systemd-logind[1677]: New session 1 of user core. Sep 13 00:09:56.530318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:09:56.535355 systemd[1]: Starting user@500.service - User Manager for UID 500... 
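The kubelet exit above is expected at this point in first boot: /var/lib/kubelet/config.yaml is normally written when the node is bootstrapped with kubeadm, so the unit keeps failing and being restarted (the restart counter reappears further down) until provisioning supplies it. For orientation, a minimal sketch of what such a KubeletConfiguration looks like; the values are illustrative defaults, not this node's eventual config:

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet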
Sep 13 00:09:56.583186 (systemd)[1838]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:09:56.914218 waagent[1804]: 2025-09-13T00:09:56.911053Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 13 00:09:56.914827 waagent[1804]: 2025-09-13T00:09:56.914752Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 13 00:09:56.917338 waagent[1804]: 2025-09-13T00:09:56.917268Z INFO Daemon Daemon Python: 3.11.9 Sep 13 00:09:56.919836 waagent[1804]: 2025-09-13T00:09:56.919760Z INFO Daemon Daemon Run daemon Sep 13 00:09:56.922002 waagent[1804]: 2025-09-13T00:09:56.921950Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 13 00:09:56.925904 waagent[1804]: 2025-09-13T00:09:56.925829Z INFO Daemon Daemon Using waagent for provisioning Sep 13 00:09:56.928463 waagent[1804]: 2025-09-13T00:09:56.928411Z INFO Daemon Daemon Activate resource disk Sep 13 00:09:56.930586 waagent[1804]: 2025-09-13T00:09:56.930531Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 00:09:56.941172 waagent[1804]: 2025-09-13T00:09:56.939246Z INFO Daemon Daemon Found device: None Sep 13 00:09:56.941485 waagent[1804]: 2025-09-13T00:09:56.941431Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 00:09:56.945048 waagent[1804]: 2025-09-13T00:09:56.944971Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 00:09:56.952085 waagent[1804]: 2025-09-13T00:09:56.952006Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:09:56.954717 waagent[1804]: 2025-09-13T00:09:56.954658Z INFO Daemon Daemon Running default provisioning handler Sep 13 00:09:56.979221 waagent[1804]: 2025-09-13T00:09:56.966782Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 13 00:09:56.979221 waagent[1804]: 2025-09-13T00:09:56.969668Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:09:56.979221 waagent[1804]: 2025-09-13T00:09:56.970289Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:09:56.979221 waagent[1804]: 2025-09-13T00:09:56.971015Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 00:09:56.994750 systemd[1838]: Queued start job for default target default.target. Sep 13 00:09:57.003170 systemd[1838]: Created slice app.slice - User Application Slice. Sep 13 00:09:57.003209 systemd[1838]: Reached target paths.target - Paths. Sep 13 00:09:57.003227 systemd[1838]: Reached target timers.target - Timers. Sep 13 00:09:57.004555 systemd[1838]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:09:57.024994 systemd[1838]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:09:57.025348 systemd[1838]: Reached target sockets.target - Sockets. Sep 13 00:09:57.025493 systemd[1838]: Reached target basic.target - Basic System. Sep 13 00:09:57.025765 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:09:57.027549 systemd[1838]: Reached target default.target - Main User Target. Sep 13 00:09:57.027618 systemd[1838]: Startup finished in 437ms. Sep 13 00:09:57.032192 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 13 00:09:57.070599 waagent[1804]: 2025-09-13T00:09:57.068302Z INFO Daemon Daemon Successfully mounted dvd Sep 13 00:09:57.096632 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 00:09:57.098875 waagent[1804]: 2025-09-13T00:09:57.098792Z INFO Daemon Daemon Detect protocol endpoint Sep 13 00:09:57.101458 waagent[1804]: 2025-09-13T00:09:57.101387Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:09:57.106666 waagent[1804]: 2025-09-13T00:09:57.102941Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 00:09:57.106666 waagent[1804]: 2025-09-13T00:09:57.103683Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 00:09:57.106666 waagent[1804]: 2025-09-13T00:09:57.104749Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 00:09:57.106666 waagent[1804]: 2025-09-13T00:09:57.105434Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 00:09:57.132739 waagent[1804]: 2025-09-13T00:09:57.131254Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 00:09:57.133142 waagent[1804]: 2025-09-13T00:09:57.133113Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 00:09:57.134078 waagent[1804]: 2025-09-13T00:09:57.134040Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 00:09:57.306157 waagent[1804]: 2025-09-13T00:09:57.306056Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 00:09:57.309625 waagent[1804]: 2025-09-13T00:09:57.309557Z INFO Daemon Daemon Forcing an update of the goal state. Sep 13 00:09:57.316168 waagent[1804]: 2025-09-13T00:09:57.316109Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:09:57.378775 waagent[1804]: 2025-09-13T00:09:57.378693Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.380544Z INFO Daemon Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.382060Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4eb32fa5-a03c-49ab-89c0-0b8fccdc8207 eTag: 3817329802429031655 source: Fabric] Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.383427Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.384729Z INFO Daemon Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.384984Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:09:57.393125 waagent[1804]: 2025-09-13T00:09:57.389621Z INFO Daemon Daemon Downloading artifacts profile blob Sep 13 00:09:57.465188 login[1807]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:09:57.471425 systemd-logind[1677]: New session 2 of user core. Sep 13 00:09:57.477200 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:09:57.524046 waagent[1804]: 2025-09-13T00:09:57.522252Z INFO Daemon Downloaded certificate {'thumbprint': '8F4AF050F17C2B588E341D6D3DA0403B12222AF0', 'hasPrivateKey': True} Sep 13 00:09:57.524202 waagent[1804]: 2025-09-13T00:09:57.523997Z INFO Daemon Fetch goal state completed Sep 13 00:09:57.562390 waagent[1804]: 2025-09-13T00:09:57.562241Z INFO Daemon Daemon Starting provisioning Sep 13 00:09:57.568568 waagent[1804]: 2025-09-13T00:09:57.563429Z INFO Daemon Daemon Handle ovf-env.xml. 
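Protocol detection here reduces to a reachability check against the WireServer followed by a version handshake. The same probes can be replayed manually; the goal-state request takes the x-ms-version header matching the "Wire protocol version:2012-11-30" negotiated above, and the URL shape is the one coreos-metadata fetched earlier:

    # confirm a route to the WireServer exists, as the daemon logs
    ip route get 168.63.129.16
    # fetch the current goal state incarnation
    curl -s -H "x-ms-version: 2012-11-30" "http://168.63.129.16/machine/?comp=goalstate"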
Sep 13 00:09:57.568568 waagent[1804]: 2025-09-13T00:09:57.563950Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-e49e858a9f] Sep 13 00:09:57.591724 waagent[1804]: 2025-09-13T00:09:57.591636Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-e49e858a9f] Sep 13 00:09:57.594433 waagent[1804]: 2025-09-13T00:09:57.593050Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 00:09:57.594433 waagent[1804]: 2025-09-13T00:09:57.593842Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 00:09:57.623447 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:09:57.623459 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:09:57.623514 systemd-networkd[1436]: eth0: DHCP lease lost Sep 13 00:09:57.625252 waagent[1804]: 2025-09-13T00:09:57.625144Z INFO Daemon Daemon Create user account if not exists Sep 13 00:09:57.639850 waagent[1804]: 2025-09-13T00:09:57.626458Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 00:09:57.639850 waagent[1804]: 2025-09-13T00:09:57.627196Z INFO Daemon Daemon Configure sudoer Sep 13 00:09:57.639850 waagent[1804]: 2025-09-13T00:09:57.627965Z INFO Daemon Daemon Configure sshd Sep 13 00:09:57.639850 waagent[1804]: 2025-09-13T00:09:57.629108Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 13 00:09:57.639850 waagent[1804]: 2025-09-13T00:09:57.629906Z INFO Daemon Daemon Deploy ssh public key. Sep 13 00:09:57.640139 systemd-networkd[1436]: eth0: DHCPv6 lease lost Sep 13 00:09:57.679127 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 13 00:09:58.759741 waagent[1804]: 2025-09-13T00:09:58.757518Z INFO Daemon Daemon Provisioning complete Sep 13 00:09:58.773497 waagent[1804]: 2025-09-13T00:09:58.773423Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 00:09:58.776325 waagent[1804]: 2025-09-13T00:09:58.776255Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 13 00:09:58.780562 waagent[1804]: 2025-09-13T00:09:58.780497Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 13 00:09:59.643724 waagent[1886]: 2025-09-13T00:09:59.643624Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 13 00:09:59.644219 waagent[1886]: 2025-09-13T00:09:59.643799Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 13 00:09:59.644219 waagent[1886]: 2025-09-13T00:09:59.643879Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 13 00:09:59.695261 waagent[1886]: 2025-09-13T00:09:59.695170Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 00:09:59.695496 waagent[1886]: 2025-09-13T00:09:59.695448Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:09:59.695588 waagent[1886]: 2025-09-13T00:09:59.695546Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:09:59.703510 waagent[1886]: 2025-09-13T00:09:59.703431Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:09:59.709501 waagent[1886]: 2025-09-13T00:09:59.709434Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 00:09:59.710036 waagent[1886]: 2025-09-13T00:09:59.709962Z INFO ExtHandler Sep 13 00:09:59.710131 waagent[1886]: 2025-09-13T00:09:59.710074Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9e467b60-6218-4276-9f09-7d4d5a9185bd eTag: 3817329802429031655 source: Fabric] Sep 13 00:09:59.710452 waagent[1886]: 2025-09-13T00:09:59.710400Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 00:09:59.731628 waagent[1886]: 2025-09-13T00:09:59.731474Z INFO ExtHandler Sep 13 00:09:59.731798 waagent[1886]: 2025-09-13T00:09:59.731736Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:09:59.737084 waagent[1886]: 2025-09-13T00:09:59.737004Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 00:09:59.966625 waagent[1886]: 2025-09-13T00:09:59.966497Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F4AF050F17C2B588E341D6D3DA0403B12222AF0', 'hasPrivateKey': True} Sep 13 00:09:59.967398 waagent[1886]: 2025-09-13T00:09:59.967320Z INFO ExtHandler Fetch goal state completed Sep 13 00:09:59.983042 waagent[1886]: 2025-09-13T00:09:59.982960Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1886 Sep 13 00:09:59.983230 waagent[1886]: 2025-09-13T00:09:59.983176Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 13 00:09:59.985087 waagent[1886]: 2025-09-13T00:09:59.985002Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 00:09:59.985462 waagent[1886]: 2025-09-13T00:09:59.985412Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 00:10:00.198319 waagent[1886]: 2025-09-13T00:10:00.198260Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 00:10:00.198607 waagent[1886]: 2025-09-13T00:10:00.198547Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 00:10:00.205873 waagent[1886]: 2025-09-13T00:10:00.205827Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 13 00:10:00.213483 systemd[1]: Reloading requested from client PID 1899 ('systemctl') (unit waagent.service)... Sep 13 00:10:00.213503 systemd[1]: Reloading... Sep 13 00:10:00.300079 zram_generator::config[1929]: No configuration found. Sep 13 00:10:00.436955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:00.515141 systemd[1]: Reloading finished in 301 ms. Sep 13 00:10:00.540899 waagent[1886]: 2025-09-13T00:10:00.540792Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 13 00:10:00.549228 systemd[1]: Reloading requested from client PID 1990 ('systemctl') (unit waagent.service)... Sep 13 00:10:00.549246 systemd[1]: Reloading... Sep 13 00:10:00.644062 zram_generator::config[2028]: No configuration found. Sep 13 00:10:00.760340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:00.839299 systemd[1]: Reloading finished in 289 ms. Sep 13 00:10:00.866423 waagent[1886]: 2025-09-13T00:10:00.865344Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 13 00:10:00.866423 waagent[1886]: 2025-09-13T00:10:00.865560Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 13 00:10:01.393538 waagent[1886]: 2025-09-13T00:10:01.393434Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 13 00:10:01.394223 waagent[1886]: 2025-09-13T00:10:01.394152Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 13 00:10:01.395137 waagent[1886]: 2025-09-13T00:10:01.395067Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 00:10:01.395303 waagent[1886]: 2025-09-13T00:10:01.395242Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:10:01.396095 waagent[1886]: 2025-09-13T00:10:01.395993Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:10:01.396332 waagent[1886]: 2025-09-13T00:10:01.396276Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 00:10:01.396871 waagent[1886]: 2025-09-13T00:10:01.396774Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 00:10:01.396977 waagent[1886]: 2025-09-13T00:10:01.396925Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 00:10:01.397441 waagent[1886]: 2025-09-13T00:10:01.397387Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 13 00:10:01.397525 waagent[1886]: 2025-09-13T00:10:01.397485Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:10:01.397617 waagent[1886]: 2025-09-13T00:10:01.397579Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:10:01.397797 waagent[1886]: 2025-09-13T00:10:01.397753Z INFO EnvHandler ExtHandler Configure routes Sep 13 00:10:01.397877 waagent[1886]: 2025-09-13T00:10:01.397839Z INFO EnvHandler ExtHandler Gateway:None Sep 13 00:10:01.397949 waagent[1886]: 2025-09-13T00:10:01.397916Z INFO EnvHandler ExtHandler Routes:None Sep 13 00:10:01.398215 waagent[1886]: 2025-09-13T00:10:01.398164Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 00:10:01.398391 waagent[1886]: 2025-09-13T00:10:01.398335Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 00:10:01.398691 waagent[1886]: 2025-09-13T00:10:01.398636Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 00:10:01.399236 waagent[1886]: 2025-09-13T00:10:01.399188Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 00:10:01.399236 waagent[1886]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 00:10:01.399236 waagent[1886]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 00:10:01.399236 waagent[1886]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 00:10:01.399236 waagent[1886]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:10:01.399236 waagent[1886]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:10:01.399236 waagent[1886]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:10:01.409779 waagent[1886]: 2025-09-13T00:10:01.409719Z INFO ExtHandler ExtHandler Sep 13 00:10:01.409914 waagent[1886]: 2025-09-13T00:10:01.409845Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 01eeb764-1f68-49e2-bb4f-2d071b606aaf correlation 6d253cb8-9976-46a6-b226-89a920ce77cf created: 2025-09-13T00:08:38.132835Z] Sep 13 00:10:01.410342 waagent[1886]: 2025-09-13T00:10:01.410289Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
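The /proc/net/route dump above prints addresses as little-endian hex: 0108C80A is 10.200.8.1 (the DHCP gateway acquired earlier), 0008C80A is the 10.200.8.0 network, and mask 00FFFFFF is 255.255.255.0. A one-line sketch for decoding such a value in the shell:

    h=0108C80A   # destination/gateway field from /proc/net/route
    printf '%d.%d.%d.%d\n' 0x${h:6:2} 0x${h:4:2} 0x${h:2:2} 0x${h:0:2}   # -> 10.200.8.1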
Sep 13 00:10:01.410901 waagent[1886]: 2025-09-13T00:10:01.410856Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 13 00:10:01.458925 waagent[1886]: 2025-09-13T00:10:01.458839Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9F225AC5-EB42-445F-AC6B-7288B8FD1F97;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 13 00:10:01.559722 waagent[1886]: 2025-09-13T00:10:01.559604Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 00:10:01.559722 waagent[1886]: Executing ['ip', '-a', '-o', 'link']: Sep 13 00:10:01.559722 waagent[1886]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 00:10:01.559722 waagent[1886]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:74:68:e5 brd ff:ff:ff:ff:ff:ff Sep 13 00:10:01.559722 waagent[1886]: 3: enP24280s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:74:68:e5 brd ff:ff:ff:ff:ff:ff\ altname enP24280p0s2 Sep 13 00:10:01.559722 waagent[1886]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 00:10:01.559722 waagent[1886]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 00:10:01.559722 waagent[1886]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 00:10:01.559722 waagent[1886]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 00:10:01.559722 waagent[1886]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 13 00:10:01.559722 waagent[1886]: 2: eth0 inet6 fe80::7eed:8dff:fe74:68e5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 13 00:10:01.622141 waagent[1886]: 2025-09-13T00:10:01.621976Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 13 00:10:01.622141 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.622141 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.622141 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.622141 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.622141 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.622141 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.622141 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:10:01.622141 waagent[1886]: 5 457 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:10:01.622141 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:10:01.626055 waagent[1886]: 2025-09-13T00:10:01.625966Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 13 00:10:01.626055 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.626055 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.626055 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.626055 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.626055 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:10:01.626055 waagent[1886]: pkts bytes target prot opt in out source destination Sep 13 00:10:01.626055 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:10:01.626055 waagent[1886]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:10:01.626055 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:10:01.626978 waagent[1886]: 2025-09-13T00:10:01.626618Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 00:10:06.423485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:10:06.434497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:06.567953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:06.573765 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:07.242720 kubelet[2120]: E0913 00:10:07.242653 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:07.246423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:07.246642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:17.423973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:10:17.430301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:17.601770 chronyd[1676]: Selected source PHC0 Sep 13 00:10:17.789166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
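The two firewall dumps above show the same three OUTPUT rules sampled twice; only the packet/byte counters on the owner-match rule differ (5/457, then 10/1102). Together they implement waagent's WireServer protection: permit DNS to 168.63.129.16, permit traffic owned by UID 0, and drop any other new connection to that address. Read back as iptables commands they correspond roughly to the following (a reconstruction from the dump; the table the agent actually targets varies by agent version):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP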
Sep 13 00:10:17.800424 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:18.200843 kubelet[2135]: E0913 00:10:18.200758 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:18.203336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:18.203553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:25.081548 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:10:25.089415 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.16.10:50290.service - OpenSSH per-connection server daemon (10.200.16.10:50290). Sep 13 00:10:25.764345 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 50290 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:25.766242 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:25.772318 systemd-logind[1677]: New session 3 of user core. Sep 13 00:10:25.779247 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:10:26.371351 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.16.10:50302.service - OpenSSH per-connection server daemon (10.200.16.10:50302). Sep 13 00:10:26.992109 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 50302 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:26.993897 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:27.000079 systemd-logind[1677]: New session 4 of user core. Sep 13 00:10:27.009183 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:10:27.437573 sshd[2148]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:27.441908 systemd[1]: sshd@1-10.200.8.10:22-10.200.16.10:50302.service: Deactivated successfully. Sep 13 00:10:27.445341 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:10:27.446275 systemd-logind[1677]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:10:27.447384 systemd-logind[1677]: Removed session 4. Sep 13 00:10:27.556865 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.16.10:50312.service - OpenSSH per-connection server daemon (10.200.16.10:50312). Sep 13 00:10:28.193638 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 50312 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:28.195250 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:28.200430 systemd-logind[1677]: New session 5 of user core. Sep 13 00:10:28.206283 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:10:28.209101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:10:28.216677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:28.593266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
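The kubelet failures above (restart counters 1 and 2) are the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so until one of those runs the unit exits with status 1 and systemd reschedules it. For reference, the smallest file of that kind looks something like this sketch (illustrative values; the real file is generated, not hand-written):

    # Normally produced by kubeadm; shown only to make the missing file concrete.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF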
Sep 13 00:10:28.604436 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:28.643632 kubelet[2167]: E0913 00:10:28.643568 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:28.646193 sshd[2155]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:28.647512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:28.648228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:28.650797 systemd[1]: sshd@2-10.200.8.10:22-10.200.16.10:50312.service: Deactivated successfully. Sep 13 00:10:28.652795 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:10:28.653619 systemd-logind[1677]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:10:28.654535 systemd-logind[1677]: Removed session 5. Sep 13 00:10:28.759222 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.16.10:50318.service - OpenSSH per-connection server daemon (10.200.16.10:50318). Sep 13 00:10:29.381141 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 50318 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:29.383139 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:29.388160 systemd-logind[1677]: New session 6 of user core. Sep 13 00:10:29.394256 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:10:29.834642 sshd[2177]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:29.838124 systemd[1]: sshd@3-10.200.8.10:22-10.200.16.10:50318.service: Deactivated successfully. Sep 13 00:10:29.840581 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:10:29.842477 systemd-logind[1677]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:10:29.843647 systemd-logind[1677]: Removed session 6. Sep 13 00:10:29.945714 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.16.10:59310.service - OpenSSH per-connection server daemon (10.200.16.10:59310). Sep 13 00:10:30.581986 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 59310 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:30.583505 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:30.589543 systemd-logind[1677]: New session 7 of user core. Sep 13 00:10:30.596250 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:10:31.134881 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:10:31.135390 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:31.173636 sudo[2187]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:31.274732 sshd[2184]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:31.279329 systemd[1]: sshd@4-10.200.8.10:22-10.200.16.10:59310.service: Deactivated successfully. Sep 13 00:10:31.281484 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:10:31.282410 systemd-logind[1677]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:10:31.283457 systemd-logind[1677]: Removed session 7. 
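The restart cadence is regular: counters 1 through 3 are scheduled at 00:10:06, 00:10:17 and 00:10:28, roughly 11 s apart, which fits a restart delay of about 10 s plus a second of startup. That is consistent with a unit configured along the following lines (an inference; the kubelet unit file itself is not shown in this log):

    [Service]
    Restart=on-failure
    RestartSec=10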
Sep 13 00:10:31.384357 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.16.10:59316.service - OpenSSH per-connection server daemon (10.200.16.10:59316). Sep 13 00:10:32.010678 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 59316 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:32.012327 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:32.017494 systemd-logind[1677]: New session 8 of user core. Sep 13 00:10:32.023224 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:10:32.354277 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:10:32.354901 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:32.358279 sudo[2196]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:32.363380 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:10:32.363737 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:32.376402 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:10:32.379132 auditctl[2199]: No rules Sep 13 00:10:32.379512 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:10:32.379727 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:10:32.382706 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:10:32.424963 augenrules[2217]: No rules Sep 13 00:10:32.426534 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:10:32.427843 sudo[2195]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:32.528731 sshd[2192]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:32.532311 systemd[1]: sshd@5-10.200.8.10:22-10.200.16.10:59316.service: Deactivated successfully. Sep 13 00:10:32.534395 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:10:32.535953 systemd-logind[1677]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:10:32.536865 systemd-logind[1677]: Removed session 8. Sep 13 00:10:32.640342 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.16.10:59324.service - OpenSSH per-connection server daemon (10.200.16.10:59324). Sep 13 00:10:33.259453 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 59324 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4 Sep 13 00:10:33.261065 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:33.265080 systemd-logind[1677]: New session 9 of user core. Sep 13 00:10:33.274216 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:10:33.603347 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:10:33.603722 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:33.927368 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 13 00:10:36.625469 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:10:36.627439 (dockerd)[2243]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:10:38.673590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
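On the audit-rules restart above: session 8 first removed the files under /etc/audit/rules.d/, so both auditctl and augenrules report "No rules" when the service reloads. Done by hand, the equivalent sequence would be roughly (a sketch; the unit's actual ExecStart is not visible here):

    auditctl -D        # flush the currently loaded kernel audit rules
    augenrules --load  # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l        # list loaded rules; prints "No rules" as in the log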
Sep 13 00:10:38.680379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:39.453044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:39.465380 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:39.506805 kubelet[2256]: E0913 00:10:39.506747 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:39.509046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:39.509264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:39.515926 update_engine[1678]: I20250913 00:10:39.515872 1678 update_attempter.cc:509] Updating boot flags... Sep 13 00:10:39.650494 dockerd[2243]: time="2025-09-13T00:10:39.650204933Z" level=info msg="Starting up" Sep 13 00:10:39.915690 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2282) Sep 13 00:10:40.090053 systemd[1]: var-lib-docker-metacopy\x2dcheck3164061753-merged.mount: Deactivated successfully. Sep 13 00:10:40.112350 dockerd[2243]: time="2025-09-13T00:10:40.112295785Z" level=info msg="Loading containers: start." Sep 13 00:10:40.352189 kernel: Initializing XFRM netlink socket Sep 13 00:10:40.573988 systemd-networkd[1436]: docker0: Link UP Sep 13 00:10:40.597714 dockerd[2243]: time="2025-09-13T00:10:40.597632081Z" level=info msg="Loading containers: done." Sep 13 00:10:40.653471 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4111891793-merged.mount: Deactivated successfully. Sep 13 00:10:40.660164 dockerd[2243]: time="2025-09-13T00:10:40.660117708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:10:40.660771 dockerd[2243]: time="2025-09-13T00:10:40.660255810Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:10:40.660771 dockerd[2243]: time="2025-09-13T00:10:40.660405612Z" level=info msg="Daemon has completed initialization" Sep 13 00:10:40.724521 dockerd[2243]: time="2025-09-13T00:10:40.724112357Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:10:40.724556 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:10:41.840578 containerd[1691]: time="2025-09-13T00:10:41.840531515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:10:42.709605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933951832.mount: Deactivated successfully. 
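dockerd's overlay2 warning above ("kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") can be checked against the running kernel's build config. One way, assuming the config is exposed at one of the usual locations (neither path is confirmed by this log):

    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null \
      || grep CONFIG_OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)"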
Sep 13 00:10:44.576213 containerd[1691]: time="2025-09-13T00:10:44.576160109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.578393 containerd[1691]: time="2025-09-13T00:10:44.578324343Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 13 00:10:44.581034 containerd[1691]: time="2025-09-13T00:10:44.580959785Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.589362 containerd[1691]: time="2025-09-13T00:10:44.589305517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:44.590510 containerd[1691]: time="2025-09-13T00:10:44.590463235Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.74988972s" Sep 13 00:10:44.590510 containerd[1691]: time="2025-09-13T00:10:44.590506036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:10:44.591776 containerd[1691]: time="2025-09-13T00:10:44.591748055Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:10:46.492422 containerd[1691]: time="2025-09-13T00:10:46.492359935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:46.495006 containerd[1691]: time="2025-09-13T00:10:46.494806873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 13 00:10:46.497621 containerd[1691]: time="2025-09-13T00:10:46.497546917Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:46.502062 containerd[1691]: time="2025-09-13T00:10:46.501973287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:46.503176 containerd[1691]: time="2025-09-13T00:10:46.502997603Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.911214847s" Sep 13 00:10:46.503176 containerd[1691]: time="2025-09-13T00:10:46.503059804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:10:46.503988 
containerd[1691]: time="2025-09-13T00:10:46.503954718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:10:48.038743 containerd[1691]: time="2025-09-13T00:10:48.038681807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.042101 containerd[1691]: time="2025-09-13T00:10:48.042037960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 13 00:10:48.045134 containerd[1691]: time="2025-09-13T00:10:48.045095008Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.049829 containerd[1691]: time="2025-09-13T00:10:48.049760782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:48.051253 containerd[1691]: time="2025-09-13T00:10:48.051059103Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.547067584s" Sep 13 00:10:48.051253 containerd[1691]: time="2025-09-13T00:10:48.051102603Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:10:48.051986 containerd[1691]: time="2025-09-13T00:10:48.051814914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:10:49.244807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628386960.mount: Deactivated successfully. Sep 13 00:10:49.674579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 00:10:49.679770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:49.839248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:49.851475 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:50.452224 kubelet[2512]: E0913 00:10:50.452125 2512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:50.454739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:50.454955 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
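The pull records above report both payload size and wall time, so effective throughput falls out directly; for the kube-apiserver image (28834515 bytes in 2.74988972 s):

    echo 'scale=2; 28834515 / 2.74988972 / 1048576' | bc   # 9.99, i.e. about 10 MiB/s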
Sep 13 00:10:50.470072 containerd[1691]: time="2025-09-13T00:10:50.469998023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:50.473530 containerd[1691]: time="2025-09-13T00:10:50.473465080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 13 00:10:50.476739 containerd[1691]: time="2025-09-13T00:10:50.476670732Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:50.481573 containerd[1691]: time="2025-09-13T00:10:50.481507911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:50.482418 containerd[1691]: time="2025-09-13T00:10:50.482237623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.430387608s" Sep 13 00:10:50.482418 containerd[1691]: time="2025-09-13T00:10:50.482282824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:10:50.483244 containerd[1691]: time="2025-09-13T00:10:50.483055237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:10:51.034860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326320559.mount: Deactivated successfully. 
Sep 13 00:10:52.321589 containerd[1691]: time="2025-09-13T00:10:52.321531815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.323774 containerd[1691]: time="2025-09-13T00:10:52.323713750Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 13 00:10:52.326389 containerd[1691]: time="2025-09-13T00:10:52.326350593Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.331860 containerd[1691]: time="2025-09-13T00:10:52.331532878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.332630 containerd[1691]: time="2025-09-13T00:10:52.332589396Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.849494858s" Sep 13 00:10:52.332722 containerd[1691]: time="2025-09-13T00:10:52.332635096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:10:52.333335 containerd[1691]: time="2025-09-13T00:10:52.333306507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:10:52.859778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753753470.mount: Deactivated successfully. 
Sep 13 00:10:52.908067 containerd[1691]: time="2025-09-13T00:10:52.907985109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.911341 containerd[1691]: time="2025-09-13T00:10:52.911129861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 13 00:10:52.916243 containerd[1691]: time="2025-09-13T00:10:52.914963623Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.919714 containerd[1691]: time="2025-09-13T00:10:52.918844187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:52.919714 containerd[1691]: time="2025-09-13T00:10:52.919519998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.17389ms" Sep 13 00:10:52.919714 containerd[1691]: time="2025-09-13T00:10:52.919561899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:10:52.920415 containerd[1691]: time="2025-09-13T00:10:52.920382012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:10:53.555398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040251692.mount: Deactivated successfully. Sep 13 00:10:56.047064 containerd[1691]: time="2025-09-13T00:10:56.046986064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.049738 containerd[1691]: time="2025-09-13T00:10:56.049513305Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 13 00:10:56.054049 containerd[1691]: time="2025-09-13T00:10:56.052951962Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.058630 containerd[1691]: time="2025-09-13T00:10:56.058581054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:56.059989 containerd[1691]: time="2025-09-13T00:10:56.059941076Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.139523964s" Sep 13 00:10:56.060122 containerd[1691]: time="2025-09-13T00:10:56.059992177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:10:58.682096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
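With etcd done, the node has pulled the full control-plane set seen above: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.9, plus coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. kubeadm can print the expected list for a given version up front, e.g. (assuming kubeadm is installed, which this log does not show):

    kubeadm config images list --kubernetes-version v1.32.9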
Sep 13 00:10:58.689358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:58.719124 systemd[1]: Reloading requested from client PID 2656 ('systemctl') (unit session-9.scope)... Sep 13 00:10:58.719140 systemd[1]: Reloading... Sep 13 00:10:58.858054 zram_generator::config[2699]: No configuration found. Sep 13 00:10:58.986862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:59.085077 systemd[1]: Reloading finished in 365 ms. Sep 13 00:10:59.136446 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:10:59.136554 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:10:59.136860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:59.142543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:59.504941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:59.515440 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:11:00.257071 kubelet[2766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:11:00.257071 kubelet[2766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:11:00.257071 kubelet[2766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
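The three deprecation warnings above all point at the same migration: values belong in the file passed via --config rather than on the command line. Under the v1beta1 KubeletConfiguration schema the mapping is approximately (field names assumed from the upstream schema, not read from this host's file):

    # --container-runtime-endpoint -> containerRuntimeEndpoint
    # --volume-plugin-dir          -> volumePluginDir
    # --pod-infra-container-image  -> no field; per the warning it is removed in
    #    1.35 and the sandbox image is taken from the CRI runtime instead.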
Sep 13 00:11:00.257071 kubelet[2766]: I0913 00:11:00.256609 2766 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:11:00.458989 kubelet[2766]: I0913 00:11:00.458939 2766 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:11:00.458989 kubelet[2766]: I0913 00:11:00.458972 2766 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:11:00.459359 kubelet[2766]: I0913 00:11:00.459335 2766 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:11:00.497670 kubelet[2766]: I0913 00:11:00.497554 2766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:11:00.498719 kubelet[2766]: E0913 00:11:00.498410 2766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:11:00.504956 kubelet[2766]: E0913 00:11:00.504921 2766 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:11:00.504956 kubelet[2766]: I0913 00:11:00.504950 2766 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:11:00.508317 kubelet[2766]: I0913 00:11:00.508219 2766 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:11:00.508947 kubelet[2766]: I0913 00:11:00.508469 2766 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:11:00.509076 kubelet[2766]: I0913 00:11:00.508507 2766 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-e49e858a9f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:11:00.509076 kubelet[2766]: I0913 00:11:00.508987 2766 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:11:00.509076 kubelet[2766]: I0913 00:11:00.509010 2766 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:11:00.509304 kubelet[2766]: I0913 00:11:00.509180 2766 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:00.512283 kubelet[2766]: I0913 00:11:00.512257 2766 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:11:00.512384 kubelet[2766]: I0913 00:11:00.512301 2766 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:11:00.512384 kubelet[2766]: I0913 00:11:00.512329 2766 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:11:00.512384 kubelet[2766]: I0913 00:11:00.512345 2766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:11:00.516785 kubelet[2766]: W0913 00:11:00.516225 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Sep 13 00:11:00.516785 kubelet[2766]: E0913 00:11:00.516290 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:11:00.516785 kubelet[2766]: W0913 
00:11:00.516696 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Sep 13 00:11:00.516785 kubelet[2766]: E0913 00:11:00.516750 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:11:00.519410 kubelet[2766]: I0913 00:11:00.518418 2766 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:11:00.519410 kubelet[2766]: I0913 00:11:00.518923 2766 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:11:00.519410 kubelet[2766]: W0913 00:11:00.518995 2766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:11:00.522098 kubelet[2766]: I0913 00:11:00.522069 2766 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:11:00.522199 kubelet[2766]: I0913 00:11:00.522114 2766 server.go:1287] "Started kubelet" Sep 13 00:11:00.527036 kubelet[2766]: I0913 00:11:00.526468 2766 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:11:00.528292 kubelet[2766]: I0913 00:11:00.527573 2766 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:11:00.530771 kubelet[2766]: I0913 00:11:00.530751 2766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:11:00.532237 kubelet[2766]: I0913 00:11:00.530996 2766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:11:00.532237 kubelet[2766]: I0913 00:11:00.531295 2766 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:11:00.535217 kubelet[2766]: E0913 00:11:00.533802 2766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-e49e858a9f.1864af1548373f2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-e49e858a9f,UID:ci-4081.3.5-n-e49e858a9f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-e49e858a9f,},FirstTimestamp:2025-09-13 00:11:00.522090285 +0000 UTC m=+1.002571131,LastTimestamp:2025-09-13 00:11:00.522090285 +0000 UTC m=+1.002571131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-e49e858a9f,}" Sep 13 00:11:00.536371 kubelet[2766]: I0913 00:11:00.536343 2766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:11:00.538969 kubelet[2766]: I0913 00:11:00.538953 2766 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:11:00.540541 kubelet[2766]: E0913 00:11:00.540448 2766 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" Sep 13 00:11:00.541576 kubelet[2766]: E0913 00:11:00.541547 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-e49e858a9f?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms" Sep 13 00:11:00.542167 kubelet[2766]: I0913 00:11:00.542147 2766 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:11:00.542247 kubelet[2766]: I0913 00:11:00.542204 2766 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:11:00.542650 kubelet[2766]: W0913 00:11:00.542604 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Sep 13 00:11:00.542732 kubelet[2766]: E0913 00:11:00.542668 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:11:00.542868 kubelet[2766]: E0913 00:11:00.542847 2766 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:11:00.543377 kubelet[2766]: I0913 00:11:00.543154 2766 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:11:00.543377 kubelet[2766]: I0913 00:11:00.543271 2766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:11:00.545511 kubelet[2766]: I0913 00:11:00.545487 2766 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:11:00.574955 kubelet[2766]: I0913 00:11:00.574923 2766 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:11:00.574955 kubelet[2766]: I0913 00:11:00.574942 2766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:11:00.574955 kubelet[2766]: I0913 00:11:00.574964 2766 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:11:00.580013 kubelet[2766]: I0913 00:11:00.579978 2766 policy_none.go:49] "None policy: Start" Sep 13 00:11:00.580013 kubelet[2766]: I0913 00:11:00.580008 2766 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:11:00.580206 kubelet[2766]: I0913 00:11:00.580040 2766 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:11:00.588268 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:11:00.599681 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:11:00.603265 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:11:00.612405 kubelet[2766]: I0913 00:11:00.612342 2766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:11:00.615732 kubelet[2766]: I0913 00:11:00.615694 2766 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:11:00.615964 kubelet[2766]: I0913 00:11:00.615943 2766 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:11:00.616106 kubelet[2766]: I0913 00:11:00.615962 2766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:11:00.616827 kubelet[2766]: I0913 00:11:00.616795 2766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:11:00.618680 kubelet[2766]: I0913 00:11:00.617781 2766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:11:00.618680 kubelet[2766]: I0913 00:11:00.617835 2766 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:11:00.618680 kubelet[2766]: I0913 00:11:00.617861 2766 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:11:00.618680 kubelet[2766]: I0913 00:11:00.617869 2766 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:11:00.618680 kubelet[2766]: E0913 00:11:00.617923 2766 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:11:00.619241 kubelet[2766]: W0913 00:11:00.618969 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Sep 13 00:11:00.619241 kubelet[2766]: E0913 00:11:00.619075 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:11:00.620742 kubelet[2766]: E0913 00:11:00.620715 2766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:11:00.620827 kubelet[2766]: E0913 00:11:00.620769 2766 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-e49e858a9f\" not found" Sep 13 00:11:00.718912 kubelet[2766]: I0913 00:11:00.718871 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f" Sep 13 00:11:00.720433 kubelet[2766]: E0913 00:11:00.720135 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.5-n-e49e858a9f" Sep 13 00:11:00.730054 systemd[1]: Created slice kubepods-burstable-podbc5d90fe91c2b6aa3c2e879543ffa210.slice - libcontainer container kubepods-burstable-podbc5d90fe91c2b6aa3c2e879543ffa210.slice. 
Sep 13 00:11:00.737793 kubelet[2766]: E0913 00:11:00.737758 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.741315 systemd[1]: Created slice kubepods-burstable-pod89e2157129dc4912c118891ce2a7a9aa.slice - libcontainer container kubepods-burstable-pod89e2157129dc4912c118891ce2a7a9aa.slice.
Sep 13 00:11:00.742634 kubelet[2766]: E0913 00:11:00.742254 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-e49e858a9f?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms"
Sep 13 00:11:00.749664 kubelet[2766]: E0913 00:11:00.749438 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.751602 systemd[1]: Created slice kubepods-burstable-pod7516f3f23e7f086c46b88e2f88e2964d.slice - libcontainer container kubepods-burstable-pod7516f3f23e7f086c46b88e2f88e2964d.slice.
Sep 13 00:11:00.753598 kubelet[2766]: E0913 00:11:00.753567 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845080 kubelet[2766]: I0913 00:11:00.842995 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7516f3f23e7f086c46b88e2f88e2964d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" (UID: \"7516f3f23e7f086c46b88e2f88e2964d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845080 kubelet[2766]: I0913 00:11:00.843153 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845080 kubelet[2766]: I0913 00:11:00.843191 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845080 kubelet[2766]: I0913 00:11:00.843237 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845080 kubelet[2766]: I0913 00:11:00.843265 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845336 kubelet[2766]: I0913 00:11:00.843313 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845336 kubelet[2766]: I0913 00:11:00.843343 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845336 kubelet[2766]: I0913 00:11:00.843385 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.845336 kubelet[2766]: I0913 00:11:00.843412 2766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.922329 kubelet[2766]: I0913 00:11:00.922282 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:00.922753 kubelet[2766]: E0913 00:11:00.922718 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:01.039359 containerd[1691]: time="2025-09-13T00:11:01.039298810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-e49e858a9f,Uid:bc5d90fe91c2b6aa3c2e879543ffa210,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:01.051664 containerd[1691]: time="2025-09-13T00:11:01.051613735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-e49e858a9f,Uid:89e2157129dc4912c118891ce2a7a9aa,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:01.054966 containerd[1691]: time="2025-09-13T00:11:01.054596465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-e49e858a9f,Uid:7516f3f23e7f086c46b88e2f88e2964d,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:01.143665 kubelet[2766]: E0913 00:11:01.143474 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-e49e858a9f?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms"
Sep 13 00:11:01.324643 kubelet[2766]: I0913 00:11:01.324613 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:01.325261 kubelet[2766]: E0913 00:11:01.325043 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:01.662644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289324700.mount: Deactivated successfully.
Sep 13 00:11:01.686662 containerd[1691]: time="2025-09-13T00:11:01.686596750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:11:01.689204 containerd[1691]: time="2025-09-13T00:11:01.689143376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Sep 13 00:11:01.691776 containerd[1691]: time="2025-09-13T00:11:01.691732302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:11:01.696107 containerd[1691]: time="2025-09-13T00:11:01.696061046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:11:01.699136 containerd[1691]: time="2025-09-13T00:11:01.699085177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:11:01.702932 containerd[1691]: time="2025-09-13T00:11:01.702891215Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:11:01.705649 containerd[1691]: time="2025-09-13T00:11:01.705591042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:11:01.707809 kubelet[2766]: W0913 00:11:01.707753 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:01.707937 kubelet[2766]: E0913 00:11:01.707823 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:01.714050 containerd[1691]: time="2025-09-13T00:11:01.712682114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:11:01.715846 containerd[1691]: time="2025-09-13T00:11:01.715801846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.108979ms"
Sep 13 00:11:01.720605 containerd[1691]: time="2025-09-13T00:11:01.720513493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 681.123582ms"
Sep 13 00:11:01.725960 containerd[1691]: time="2025-09-13T00:11:01.725887947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 674.196211ms"
Sep 13 00:11:01.832998 kubelet[2766]: W0913 00:11:01.832954 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:01.832998 kubelet[2766]: E0913 00:11:01.833004 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:01.854397 kubelet[2766]: W0913 00:11:01.854351 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:01.854565 kubelet[2766]: E0913 00:11:01.854407 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:01.897375 kubelet[2766]: W0913 00:11:01.897300 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:01.897566 kubelet[2766]: E0913 00:11:01.897385 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:01.944365 kubelet[2766]: E0913 00:11:01.944312 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-e49e858a9f?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s"
Sep 13 00:11:02.127193 kubelet[2766]: I0913 00:11:02.127156 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:02.127588 kubelet[2766]: E0913 00:11:02.127552 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:02.525849 kubelet[2766]: E0913 00:11:02.525801 2766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:03.440832 kubelet[2766]: W0913 00:11:03.440787 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:03.440993 kubelet[2766]: E0913 00:11:03.440842 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-e49e858a9f&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:03.545649 kubelet[2766]: E0913 00:11:03.545595 2766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-e49e858a9f?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="3.2s"
Sep 13 00:11:03.730636 kubelet[2766]: I0913 00:11:03.730160 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:03.730636 kubelet[2766]: E0913 00:11:03.730515 2766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:03.998386 kubelet[2766]: W0913 00:11:03.998237 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:03.998386 kubelet[2766]: E0913 00:11:03.998295 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:04.210217 kubelet[2766]: W0913 00:11:04.210164 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:04.210386 kubelet[2766]: E0913 00:11:04.210221 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:04.386820 containerd[1691]: time="2025-09-13T00:11:04.385228381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:04.386820 containerd[1691]: time="2025-09-13T00:11:04.385541686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:04.386820 containerd[1691]: time="2025-09-13T00:11:04.385567887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.388497 containerd[1691]: time="2025-09-13T00:11:04.388303332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.392218 containerd[1691]: time="2025-09-13T00:11:04.391637987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:04.392218 containerd[1691]: time="2025-09-13T00:11:04.391721688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:04.393698 containerd[1691]: time="2025-09-13T00:11:04.391744088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.398434 containerd[1691]: time="2025-09-13T00:11:04.398332997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.412462 containerd[1691]: time="2025-09-13T00:11:04.412233326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:04.412462 containerd[1691]: time="2025-09-13T00:11:04.412297327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:04.412462 containerd[1691]: time="2025-09-13T00:11:04.412313627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.412745 containerd[1691]: time="2025-09-13T00:11:04.412410829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:04.460683 systemd[1]: run-containerd-runc-k8s.io-9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c-runc.Davpt3.mount: Deactivated successfully.
Sep 13 00:11:04.471870 systemd[1]: Started cri-containerd-9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c.scope - libcontainer container 9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c.
Sep 13 00:11:04.481234 systemd[1]: Started cri-containerd-46ea1a466b731d5e681f046e682d12f42b9938080364846a68cc21b43e31f379.scope - libcontainer container 46ea1a466b731d5e681f046e682d12f42b9938080364846a68cc21b43e31f379.
Sep 13 00:11:04.484218 systemd[1]: Started cri-containerd-fb9e1f7824c2eacb0ada7cf3558b7088c3b1e7b993e1c57ce78052679ea9dd97.scope - libcontainer container fb9e1f7824c2eacb0ada7cf3558b7088c3b1e7b993e1c57ce78052679ea9dd97.
Sep 13 00:11:04.542700 containerd[1691]: time="2025-09-13T00:11:04.542590974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-e49e858a9f,Uid:bc5d90fe91c2b6aa3c2e879543ffa210,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ea1a466b731d5e681f046e682d12f42b9938080364846a68cc21b43e31f379\""
Sep 13 00:11:04.547665 containerd[1691]: time="2025-09-13T00:11:04.547583657Z" level=info msg="CreateContainer within sandbox \"46ea1a466b731d5e681f046e682d12f42b9938080364846a68cc21b43e31f379\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:11:04.580978 containerd[1691]: time="2025-09-13T00:11:04.580846205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-e49e858a9f,Uid:89e2157129dc4912c118891ce2a7a9aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c\""
Sep 13 00:11:04.586833 containerd[1691]: time="2025-09-13T00:11:04.586792803Z" level=info msg="CreateContainer within sandbox \"9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:11:04.587986 containerd[1691]: time="2025-09-13T00:11:04.587910321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-e49e858a9f,Uid:7516f3f23e7f086c46b88e2f88e2964d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb9e1f7824c2eacb0ada7cf3558b7088c3b1e7b993e1c57ce78052679ea9dd97\""
Sep 13 00:11:04.591151 containerd[1691]: time="2025-09-13T00:11:04.591095274Z" level=info msg="CreateContainer within sandbox \"fb9e1f7824c2eacb0ada7cf3558b7088c3b1e7b993e1c57ce78052679ea9dd97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:11:04.597561 kubelet[2766]: W0913 00:11:04.597517 2766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Sep 13 00:11:04.597889 kubelet[2766]: E0913 00:11:04.597572 2766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.10:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:11:05.365803 containerd[1691]: time="2025-09-13T00:11:05.365620838Z" level=info msg="CreateContainer within sandbox \"46ea1a466b731d5e681f046e682d12f42b9938080364846a68cc21b43e31f379\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78eb4547cb1e576c97a02a6ad5aaeac2e0f27927b6aae48dd914bfb2843330d3\""
Sep 13 00:11:05.366709 containerd[1691]: time="2025-09-13T00:11:05.366675055Z" level=info msg="StartContainer for \"78eb4547cb1e576c97a02a6ad5aaeac2e0f27927b6aae48dd914bfb2843330d3\""
Sep 13 00:11:05.404242 systemd[1]: Started cri-containerd-78eb4547cb1e576c97a02a6ad5aaeac2e0f27927b6aae48dd914bfb2843330d3.scope - libcontainer container 78eb4547cb1e576c97a02a6ad5aaeac2e0f27927b6aae48dd914bfb2843330d3.
Sep 13 00:11:06.029267 containerd[1691]: time="2025-09-13T00:11:06.029078748Z" level=info msg="StartContainer for \"78eb4547cb1e576c97a02a6ad5aaeac2e0f27927b6aae48dd914bfb2843330d3\" returns successfully"
Sep 13 00:11:06.121321 containerd[1691]: time="2025-09-13T00:11:06.121254112Z" level=info msg="CreateContainer within sandbox \"9575898f59a689899563e79a90aff1f029701702efd3ef15998081b9b45b454c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7351504f88ab6a452d4566a311fae5377cbe505b1ba06ad20d45073c08ce1c8d\""
Sep 13 00:11:06.122925 containerd[1691]: time="2025-09-13T00:11:06.122882140Z" level=info msg="StartContainer for \"7351504f88ab6a452d4566a311fae5377cbe505b1ba06ad20d45073c08ce1c8d\""
Sep 13 00:11:06.173336 containerd[1691]: time="2025-09-13T00:11:06.173284796Z" level=info msg="CreateContainer within sandbox \"fb9e1f7824c2eacb0ada7cf3558b7088c3b1e7b993e1c57ce78052679ea9dd97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b652591a1fec0697d75f4bb81190d09002eee1c78058acc9607d301cdfaee4c1\""
Sep 13 00:11:06.175261 systemd[1]: Started cri-containerd-7351504f88ab6a452d4566a311fae5377cbe505b1ba06ad20d45073c08ce1c8d.scope - libcontainer container 7351504f88ab6a452d4566a311fae5377cbe505b1ba06ad20d45073c08ce1c8d.
Sep 13 00:11:06.176108 containerd[1691]: time="2025-09-13T00:11:06.175495033Z" level=info msg="StartContainer for \"b652591a1fec0697d75f4bb81190d09002eee1c78058acc9607d301cdfaee4c1\""
Sep 13 00:11:06.251835 systemd[1]: Started cri-containerd-b652591a1fec0697d75f4bb81190d09002eee1c78058acc9607d301cdfaee4c1.scope - libcontainer container b652591a1fec0697d75f4bb81190d09002eee1c78058acc9607d301cdfaee4c1.
Sep 13 00:11:06.293498 containerd[1691]: time="2025-09-13T00:11:06.293341434Z" level=info msg="StartContainer for \"7351504f88ab6a452d4566a311fae5377cbe505b1ba06ad20d45073c08ce1c8d\" returns successfully"
Sep 13 00:11:06.392686 containerd[1691]: time="2025-09-13T00:11:06.392511117Z" level=info msg="StartContainer for \"b652591a1fec0697d75f4bb81190d09002eee1c78058acc9607d301cdfaee4c1\" returns successfully"
Sep 13 00:11:06.935120 kubelet[2766]: I0913 00:11:06.933712 2766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.042988 kubelet[2766]: E0913 00:11:07.042657 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.047005 kubelet[2766]: E0913 00:11:07.046322 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.048272 kubelet[2766]: E0913 00:11:07.048251 2766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.295999 kubelet[2766]: E0913 00:11:07.295841 2766 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-e49e858a9f\" not found" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.376083 kubelet[2766]: I0913 00:11:07.375335 2766 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.376083 kubelet[2766]: E0913 00:11:07.375389 2766 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-e49e858a9f\": node \"ci-4081.3.5-n-e49e858a9f\" not found"
Sep 13 00:11:07.543689 kubelet[2766]: E0913 00:11:07.543465 2766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-e49e858a9f\" not found"
Sep 13 00:11:07.643978 kubelet[2766]: E0913 00:11:07.643840 2766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-e49e858a9f\" not found"
Sep 13 00:11:07.744720 kubelet[2766]: E0913 00:11:07.744665 2766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-e49e858a9f\" not found"
Sep 13 00:11:07.940352 kubelet[2766]: I0913 00:11:07.940303 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.988128 kubelet[2766]: E0913 00:11:07.987742 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.988128 kubelet[2766]: I0913 00:11:07.987809 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.989871 kubelet[2766]: E0913 00:11:07.989833 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.989871 kubelet[2766]: I0913 00:11:07.989870 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:07.991831 kubelet[2766]: E0913 00:11:07.991793 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.045246 kubelet[2766]: I0913 00:11:08.045209 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.047074 kubelet[2766]: I0913 00:11:08.045729 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.048316 kubelet[2766]: I0913 00:11:08.048278 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.051626 kubelet[2766]: E0913 00:11:08.051588 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.052040 kubelet[2766]: E0913 00:11:08.051995 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.052552 kubelet[2766]: E0913 00:11:08.052525 2766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:08.520183 kubelet[2766]: I0913 00:11:08.520133 2766 apiserver.go:52] "Watching apiserver"
Sep 13 00:11:08.542956 kubelet[2766]: I0913 00:11:08.542866 2766 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:11:09.047508 kubelet[2766]: I0913 00:11:09.047310 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:09.047508 kubelet[2766]: I0913 00:11:09.047364 2766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:09.057671 kubelet[2766]: W0913 00:11:09.057636 2766 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:09.061934 kubelet[2766]: W0913 00:11:09.061489 2766 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:09.678486 systemd[1]: Reloading requested from client PID 3037 ('systemctl') (unit session-9.scope)...
Sep 13 00:11:09.678506 systemd[1]: Reloading...
Sep 13 00:11:09.810197 zram_generator::config[3077]: No configuration found.
Sep 13 00:11:09.950629 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:11:10.045724 systemd[1]: Reloading finished in 366 ms.
Sep 13 00:11:10.086451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:11:10.104648 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:11:10.104894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:11:10.111418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:11:10.424630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:11:10.436490 (kubelet)[3144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:11:10.492790 kubelet[3144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:11:10.492790 kubelet[3144]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:11:10.492790 kubelet[3144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:11:10.493321 kubelet[3144]: I0913 00:11:10.492896 3144 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:11:10.503639 kubelet[3144]: I0913 00:11:10.501654 3144 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:11:10.503639 kubelet[3144]: I0913 00:11:10.501688 3144 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:11:10.503639 kubelet[3144]: I0913 00:11:10.502298 3144 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:11:10.510613 kubelet[3144]: I0913 00:11:10.510558 3144 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:11:10.513859 kubelet[3144]: I0913 00:11:10.513676 3144 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:11:10.521556 kubelet[3144]: E0913 00:11:10.521503 3144 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:11:10.521556 kubelet[3144]: I0913 00:11:10.521545 3144 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:11:10.525797 kubelet[3144]: I0913 00:11:10.525766 3144 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:11:10.526158 kubelet[3144]: I0913 00:11:10.526080 3144 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:11:10.526637 kubelet[3144]: I0913 00:11:10.526131 3144 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-e49e858a9f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:11:10.526637 kubelet[3144]: I0913 00:11:10.526389 3144 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:11:10.526637 kubelet[3144]: I0913 00:11:10.526403 3144 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:11:10.526637 kubelet[3144]: I0913 00:11:10.526464 3144 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:11:10.528268 kubelet[3144]: I0913 00:11:10.527117 3144 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:11:10.528268 kubelet[3144]: I0913 00:11:10.527662 3144 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:11:10.528268 kubelet[3144]: I0913 00:11:10.527694 3144 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:11:10.528268 kubelet[3144]: I0913 00:11:10.527708 3144 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:11:10.530063 kubelet[3144]: I0913 00:11:10.530032 3144 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:11:10.530652 kubelet[3144]: I0913 00:11:10.530529 3144 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:11:10.533087 kubelet[3144]: I0913 00:11:10.533064 3144 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:11:10.533182 kubelet[3144]: I0913 00:11:10.533113 3144 server.go:1287] "Started kubelet"
Sep 13 00:11:10.536764 kubelet[3144]: I0913 00:11:10.536602 3144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:11:10.541387 kubelet[3144]: I0913 00:11:10.541342 3144 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:11:10.547057 kubelet[3144]: I0913 00:11:10.546538 3144 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:11:10.549976 kubelet[3144]: I0913 00:11:10.549907 3144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:11:10.550361 kubelet[3144]: I0913 00:11:10.550343 3144 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:11:10.551109 kubelet[3144]: I0913 00:11:10.551090 3144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:11:10.555706 kubelet[3144]: I0913 00:11:10.554958 3144 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:11:10.558369 kubelet[3144]: E0913 00:11:10.558341 3144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-e49e858a9f\" not found"
Sep 13 00:11:10.563390 kubelet[3144]: I0913 00:11:10.561620 3144 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:11:10.563390 kubelet[3144]: I0913 00:11:10.561779 3144 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:11:10.573040 kubelet[3144]: I0913 00:11:10.572973 3144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:11:10.575066 kubelet[3144]: I0913 00:11:10.574294 3144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:11:10.575066 kubelet[3144]: I0913 00:11:10.574331 3144 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:11:10.575066 kubelet[3144]: I0913 00:11:10.574358 3144 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:11:10.575066 kubelet[3144]: I0913 00:11:10.574370 3144 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:11:10.575066 kubelet[3144]: E0913 00:11:10.574420 3144 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:11:10.586358 kubelet[3144]: I0913 00:11:10.586171 3144 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:11:10.586358 kubelet[3144]: I0913 00:11:10.586303 3144 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:11:10.586685 kubelet[3144]: I0913 00:11:10.586635 3144 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:11:10.615055 kubelet[3144]: E0913 00:11:10.614998 3144 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:11:10.654365 kubelet[3144]: I0913 00:11:10.654337 3144 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654537 3144 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654563 3144 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654754 3144 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654768 3144 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654790 3144 policy_none.go:49] "None policy: Start"
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654805 3144 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:11:10.654924 kubelet[3144]: I0913 00:11:10.654817 3144 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:11:10.655259 kubelet[3144]: I0913 00:11:10.655150 3144 state_mem.go:75] "Updated machine memory state"
Sep 13 00:11:10.659269 kubelet[3144]: I0913 00:11:10.659003 3144 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:11:10.660359 kubelet[3144]: I0913 00:11:10.660076 3144 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:11:10.660359 kubelet[3144]: I0913 00:11:10.660100 3144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:11:10.660359 kubelet[3144]: I0913 00:11:10.660360 3144 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:11:10.662502 kubelet[3144]: E0913 00:11:10.662357 3144 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:11:10.675946 kubelet[3144]: I0913 00:11:10.675856 3144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.677546 kubelet[3144]: I0913 00:11:10.676242 3144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.679360 kubelet[3144]: I0913 00:11:10.676381 3144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.690610 kubelet[3144]: W0913 00:11:10.690579 3144 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:10.691169 kubelet[3144]: W0913 00:11:10.691051 3144 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:10.691169 kubelet[3144]: E0913 00:11:10.691109 3144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.691929 kubelet[3144]: W0913 00:11:10.691828 3144 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:10.691929 kubelet[3144]: E0913 00:11:10.691869 3144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.774513 kubelet[3144]: I0913 00:11:10.774067 3144 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.790269 kubelet[3144]: I0913 00:11:10.790227 3144 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.790480 kubelet[3144]: I0913 00:11:10.790331 3144 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863161 kubelet[3144]: I0913 00:11:10.863072 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863161 kubelet[3144]: I0913 00:11:10.863136 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863161 kubelet[3144]: I0913 00:11:10.863163 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863540 kubelet[3144]: I0913 00:11:10.863189 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7516f3f23e7f086c46b88e2f88e2964d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" (UID: \"7516f3f23e7f086c46b88e2f88e2964d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863540 kubelet[3144]: I0913 00:11:10.863215 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863540 kubelet[3144]: I0913 00:11:10.863234 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863540 kubelet[3144]: I0913 00:11:10.863253 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc5d90fe91c2b6aa3c2e879543ffa210-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-e49e858a9f\" (UID: \"bc5d90fe91c2b6aa3c2e879543ffa210\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863540 kubelet[3144]: I0913 00:11:10.863277 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:10.863696 kubelet[3144]: I0913 00:11:10.863299 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/89e2157129dc4912c118891ce2a7a9aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-e49e858a9f\" (UID: \"89e2157129dc4912c118891ce2a7a9aa\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:11.547082 kubelet[3144]: I0913 00:11:11.547034 3144 apiserver.go:52] "Watching apiserver"
Sep 13 00:11:11.562092 kubelet[3144]: I0913 00:11:11.562008 3144 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:11:13.620738 kubelet[3144]: I0913 00:11:11.630155 3144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:13.620738 kubelet[3144]: I0913 00:11:11.640235 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-e49e858a9f" podStartSLOduration=2.640213301 podStartE2EDuration="2.640213301s" podCreationTimestamp="2025-09-13 00:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:11.639968997 +0000 UTC m=+1.198959354" watchObservedRunningTime="2025-09-13 00:11:11.640213301 +0000 UTC m=+1.199203558"
Sep 13 00:11:13.620738 kubelet[3144]: I0913 00:11:11.640405 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f" podStartSLOduration=2.640396604 podStartE2EDuration="2.640396604s" podCreationTimestamp="2025-09-13 00:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:11.627894192 +0000 UTC m=+1.186884449" watchObservedRunningTime="2025-09-13 00:11:11.640396604 +0000 UTC m=+1.199386861"
Sep 13 00:11:13.620738 kubelet[3144]: W0913 00:11:11.644938 3144 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 00:11:13.620738 kubelet[3144]: E0913 00:11:11.645187 3144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-e49e858a9f\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-e49e858a9f"
Sep 13 00:11:13.621498 kubelet[3144]: I0913 00:11:11.662930 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-e49e858a9f" podStartSLOduration=1.662908586 podStartE2EDuration="1.662908586s" podCreationTimestamp="2025-09-13 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:11.662900886 +0000 UTC m=+1.221891243" watchObservedRunningTime="2025-09-13 00:11:11.662908586 +0000 UTC m=+1.221898843"
Sep 13 00:11:15.149773 kubelet[3144]: I0913 00:11:15.149734 3144 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:11:15.150292 containerd[1691]: time="2025-09-13T00:11:15.150182896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:11:15.150604 kubelet[3144]: I0913 00:11:15.150396 3144 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:11:16.060844 systemd[1]: Created slice kubepods-besteffort-podb0e81f91_514e_4e9d_80ba_799841577c58.slice - libcontainer container kubepods-besteffort-podb0e81f91_514e_4e9d_80ba_799841577c58.slice.
Sep 13 00:11:16.099587 kubelet[3144]: I0913 00:11:16.099534 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0e81f91-514e-4e9d-80ba-799841577c58-kube-proxy\") pod \"kube-proxy-4c86l\" (UID: \"b0e81f91-514e-4e9d-80ba-799841577c58\") " pod="kube-system/kube-proxy-4c86l"
Sep 13 00:11:16.099587 kubelet[3144]: I0913 00:11:16.099585 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0e81f91-514e-4e9d-80ba-799841577c58-xtables-lock\") pod \"kube-proxy-4c86l\" (UID: \"b0e81f91-514e-4e9d-80ba-799841577c58\") " pod="kube-system/kube-proxy-4c86l"
Sep 13 00:11:16.099812 kubelet[3144]: I0913 00:11:16.099608 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0e81f91-514e-4e9d-80ba-799841577c58-lib-modules\") pod \"kube-proxy-4c86l\" (UID: \"b0e81f91-514e-4e9d-80ba-799841577c58\") " pod="kube-system/kube-proxy-4c86l"
Sep 13 00:11:16.099812 kubelet[3144]: I0913 00:11:16.099628 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbwkd\" (UniqueName: \"kubernetes.io/projected/b0e81f91-514e-4e9d-80ba-799841577c58-kube-api-access-bbwkd\") pod \"kube-proxy-4c86l\" (UID: \"b0e81f91-514e-4e9d-80ba-799841577c58\") " pod="kube-system/kube-proxy-4c86l"
Sep 13 00:11:17.271784 containerd[1691]: time="2025-09-13T00:11:17.271696514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4c86l,Uid:b0e81f91-514e-4e9d-80ba-799841577c58,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:17.540124 kubelet[3144]: I0913 00:11:17.539828 3144 status_manager.go:890] "Failed to get status for pod" podUID="d8e02046-cec9-4470-879f-0b12cbe5ec93" pod="tigera-operator/tigera-operator-755d956888-p9pzj" err="pods \"tigera-operator-755d956888-p9pzj\" is forbidden: User \"system:node:ci-4081.3.5-n-e49e858a9f\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.5-n-e49e858a9f' and this object"
Sep 13 00:11:17.540124 kubelet[3144]: W0913 00:11:17.540074 3144 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-n-e49e858a9f" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.5-n-e49e858a9f' and this object
Sep 13 00:11:17.540124 kubelet[3144]: E0913 00:11:17.540112 3144 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-n-e49e858a9f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.5-n-e49e858a9f' and this object" logger="UnhandledError"
Sep 13 00:11:17.540764 kubelet[3144]: W0913 00:11:17.540288 3144 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.5-n-e49e858a9f" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.5-n-e49e858a9f' and this object
Sep 13 00:11:17.540764 kubelet[3144]: E0913 00:11:17.540314 3144 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081.3.5-n-e49e858a9f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.5-n-e49e858a9f' and this object" logger="UnhandledError"
Sep 13 00:11:17.550502 systemd[1]: Created slice kubepods-besteffort-podd8e02046_cec9_4470_879f_0b12cbe5ec93.slice - libcontainer container kubepods-besteffort-podd8e02046_cec9_4470_879f_0b12cbe5ec93.slice.
Sep 13 00:11:17.606918 kubelet[3144]: I0913 00:11:17.606866 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92k9\" (UniqueName: \"kubernetes.io/projected/d8e02046-cec9-4470-879f-0b12cbe5ec93-kube-api-access-d92k9\") pod \"tigera-operator-755d956888-p9pzj\" (UID: \"d8e02046-cec9-4470-879f-0b12cbe5ec93\") " pod="tigera-operator/tigera-operator-755d956888-p9pzj"
Sep 13 00:11:17.606918 kubelet[3144]: I0913 00:11:17.606921 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8e02046-cec9-4470-879f-0b12cbe5ec93-var-lib-calico\") pod \"tigera-operator-755d956888-p9pzj\" (UID: \"d8e02046-cec9-4470-879f-0b12cbe5ec93\") " pod="tigera-operator/tigera-operator-755d956888-p9pzj"
Sep 13 00:11:18.713941 kubelet[3144]: E0913 00:11:18.713870 3144 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:11:18.713941 kubelet[3144]: E0913 00:11:18.713935 3144 projected.go:194] Error preparing data for projected volume kube-api-access-d92k9 for pod tigera-operator/tigera-operator-755d956888-p9pzj: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:11:18.714685 kubelet[3144]: E0913 00:11:18.714065 3144 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8e02046-cec9-4470-879f-0b12cbe5ec93-kube-api-access-d92k9 podName:d8e02046-cec9-4470-879f-0b12cbe5ec93 nodeName:}" failed. No retries permitted until 2025-09-13 00:11:19.21400712 +0000 UTC m=+8.772997377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d92k9" (UniqueName: "kubernetes.io/projected/d8e02046-cec9-4470-879f-0b12cbe5ec93-kube-api-access-d92k9") pod "tigera-operator-755d956888-p9pzj" (UID: "d8e02046-cec9-4470-879f-0b12cbe5ec93") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:11:19.357441 containerd[1691]: time="2025-09-13T00:11:19.357382140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-p9pzj,Uid:d8e02046-cec9-4470-879f-0b12cbe5ec93,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:11:23.663823 containerd[1691]: time="2025-09-13T00:11:23.663682152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:23.664945 containerd[1691]: time="2025-09-13T00:11:23.663824054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:23.664945 containerd[1691]: time="2025-09-13T00:11:23.663881355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:23.667390 containerd[1691]: time="2025-09-13T00:11:23.667101708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:11:23.667390 containerd[1691]: time="2025-09-13T00:11:23.667159609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:11:23.667390 containerd[1691]: time="2025-09-13T00:11:23.667179209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:23.667390 containerd[1691]: time="2025-09-13T00:11:23.667264811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:23.667672 containerd[1691]: time="2025-09-13T00:11:23.664621367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:11:23.709214 systemd[1]: Started cri-containerd-709860542501695992c7b9d6db6d974eed92e2c75e7b87705e5bbd01d3aa0873.scope - libcontainer container 709860542501695992c7b9d6db6d974eed92e2c75e7b87705e5bbd01d3aa0873.
Sep 13 00:11:23.710853 systemd[1]: Started cri-containerd-94dec44925d00f5b99c21a6cefa3ec59b59a73de46de402d5909692fd0abe7f2.scope - libcontainer container 94dec44925d00f5b99c21a6cefa3ec59b59a73de46de402d5909692fd0abe7f2.
Sep 13 00:11:23.756356 containerd[1691]: time="2025-09-13T00:11:23.756242772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4c86l,Uid:b0e81f91-514e-4e9d-80ba-799841577c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"709860542501695992c7b9d6db6d974eed92e2c75e7b87705e5bbd01d3aa0873\""
Sep 13 00:11:23.766144 containerd[1691]: time="2025-09-13T00:11:23.765401022Z" level=info msg="CreateContainer within sandbox \"709860542501695992c7b9d6db6d974eed92e2c75e7b87705e5bbd01d3aa0873\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:11:23.783604 containerd[1691]: time="2025-09-13T00:11:23.783561720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-p9pzj,Uid:d8e02046-cec9-4470-879f-0b12cbe5ec93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"94dec44925d00f5b99c21a6cefa3ec59b59a73de46de402d5909692fd0abe7f2\""
Sep 13 00:11:23.787475 containerd[1691]: time="2025-09-13T00:11:23.787407384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:11:23.822330 containerd[1691]: time="2025-09-13T00:11:23.822270656Z" level=info msg="CreateContainer within sandbox \"709860542501695992c7b9d6db6d974eed92e2c75e7b87705e5bbd01d3aa0873\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2291bc2546b8fd9cacf88152b5d42391d215c9e594c2f07414f65fda623039d6\""
Sep 13 00:11:23.823043 containerd[1691]: time="2025-09-13T00:11:23.822951767Z" level=info msg="StartContainer for \"2291bc2546b8fd9cacf88152b5d42391d215c9e594c2f07414f65fda623039d6\""
Sep 13 00:11:23.856252 systemd[1]: Started cri-containerd-2291bc2546b8fd9cacf88152b5d42391d215c9e594c2f07414f65fda623039d6.scope - libcontainer container 2291bc2546b8fd9cacf88152b5d42391d215c9e594c2f07414f65fda623039d6.
Sep 13 00:11:23.893187 containerd[1691]: time="2025-09-13T00:11:23.893134720Z" level=info msg="StartContainer for \"2291bc2546b8fd9cacf88152b5d42391d215c9e594c2f07414f65fda623039d6\" returns successfully" Sep 13 00:11:24.669476 kubelet[3144]: I0913 00:11:24.669412 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4c86l" podStartSLOduration=8.669394567 podStartE2EDuration="8.669394567s" podCreationTimestamp="2025-09-13 00:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:24.66899616 +0000 UTC m=+14.227986517" watchObservedRunningTime="2025-09-13 00:11:24.669394567 +0000 UTC m=+14.228384824" Sep 13 00:11:25.726798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534604651.mount: Deactivated successfully. Sep 13 00:11:26.384649 containerd[1691]: time="2025-09-13T00:11:26.384595032Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.387178 containerd[1691]: time="2025-09-13T00:11:26.387110474Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:11:26.389788 containerd[1691]: time="2025-09-13T00:11:26.389733617Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.394149 containerd[1691]: time="2025-09-13T00:11:26.394097988Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:26.395343 containerd[1691]: time="2025-09-13T00:11:26.394757699Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.607299515s" Sep 13 00:11:26.395343 containerd[1691]: time="2025-09-13T00:11:26.394797200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:11:26.397196 containerd[1691]: time="2025-09-13T00:11:26.397163339Z" level=info msg="CreateContainer within sandbox \"94dec44925d00f5b99c21a6cefa3ec59b59a73de46de402d5909692fd0abe7f2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:11:26.427795 containerd[1691]: time="2025-09-13T00:11:26.427756741Z" level=info msg="CreateContainer within sandbox \"94dec44925d00f5b99c21a6cefa3ec59b59a73de46de402d5909692fd0abe7f2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"641e31995730a418ba4eb2899b67083df4a7bc2c4f13e8b7b383f4bb1ab46e4f\"" Sep 13 00:11:26.428560 containerd[1691]: time="2025-09-13T00:11:26.428399152Z" level=info msg="StartContainer for \"641e31995730a418ba4eb2899b67083df4a7bc2c4f13e8b7b383f4bb1ab46e4f\"" Sep 13 00:11:26.459327 systemd[1]: Started cri-containerd-641e31995730a418ba4eb2899b67083df4a7bc2c4f13e8b7b383f4bb1ab46e4f.scope - libcontainer container 641e31995730a418ba4eb2899b67083df4a7bc2c4f13e8b7b383f4bb1ab46e4f. 
Sep 13 00:11:26.489042 containerd[1691]: time="2025-09-13T00:11:26.488876645Z" level=info msg="StartContainer for \"641e31995730a418ba4eb2899b67083df4a7bc2c4f13e8b7b383f4bb1ab46e4f\" returns successfully" Sep 13 00:11:32.923366 sudo[2228]: pam_unix(sudo:session): session closed for user root Sep 13 00:11:33.027312 sshd[2225]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:33.031582 systemd-logind[1677]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:11:33.032531 systemd[1]: sshd@6-10.200.8.10:22-10.200.16.10:59324.service: Deactivated successfully. Sep 13 00:11:33.036966 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:11:33.037766 systemd[1]: session-9.scope: Consumed 4.316s CPU time, 157.6M memory peak, 0B memory swap peak. Sep 13 00:11:33.040931 systemd-logind[1677]: Removed session 9. Sep 13 00:11:37.066035 kubelet[3144]: I0913 00:11:37.065682 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-p9pzj" podStartSLOduration=17.45545988 podStartE2EDuration="20.065659342s" podCreationTimestamp="2025-09-13 00:11:17 +0000 UTC" firstStartedPulling="2025-09-13 00:11:23.785421651 +0000 UTC m=+13.344412008" lastFinishedPulling="2025-09-13 00:11:26.395621213 +0000 UTC m=+15.954611470" observedRunningTime="2025-09-13 00:11:26.674627595 +0000 UTC m=+16.233617852" watchObservedRunningTime="2025-09-13 00:11:37.065659342 +0000 UTC m=+26.624649699" Sep 13 00:11:37.081822 systemd[1]: Created slice kubepods-besteffort-pod464650e7_6e13_406c_a3b4_010932c952bb.slice - libcontainer container kubepods-besteffort-pod464650e7_6e13_406c_a3b4_010932c952bb.slice. Sep 13 00:11:37.134373 kubelet[3144]: I0913 00:11:37.134307 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/464650e7-6e13-406c-a3b4-010932c952bb-tigera-ca-bundle\") pod \"calico-typha-676cfb8487-5f2b2\" (UID: \"464650e7-6e13-406c-a3b4-010932c952bb\") " pod="calico-system/calico-typha-676cfb8487-5f2b2" Sep 13 00:11:37.134373 kubelet[3144]: I0913 00:11:37.134377 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/464650e7-6e13-406c-a3b4-010932c952bb-typha-certs\") pod \"calico-typha-676cfb8487-5f2b2\" (UID: \"464650e7-6e13-406c-a3b4-010932c952bb\") " pod="calico-system/calico-typha-676cfb8487-5f2b2" Sep 13 00:11:37.134626 kubelet[3144]: I0913 00:11:37.134412 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdqt\" (UniqueName: \"kubernetes.io/projected/464650e7-6e13-406c-a3b4-010932c952bb-kube-api-access-stdqt\") pod \"calico-typha-676cfb8487-5f2b2\" (UID: \"464650e7-6e13-406c-a3b4-010932c952bb\") " pod="calico-system/calico-typha-676cfb8487-5f2b2" Sep 13 00:11:37.210733 systemd[1]: Created slice kubepods-besteffort-pod287c4c43_ea05_4192_900c_d691bb53c43c.slice - libcontainer container kubepods-besteffort-pod287c4c43_ea05_4192_900c_d691bb53c43c.slice. 
Sep 13 00:11:37.235447 kubelet[3144]: I0913 00:11:37.235110 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-cni-log-dir\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.235850 kubelet[3144]: I0913 00:11:37.235539 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llb5n\" (UniqueName: \"kubernetes.io/projected/287c4c43-ea05-4192-900c-d691bb53c43c-kube-api-access-llb5n\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.235850 kubelet[3144]: I0913 00:11:37.235607 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-cni-net-dir\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.235850 kubelet[3144]: I0913 00:11:37.235633 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-var-lib-calico\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.235850 kubelet[3144]: I0913 00:11:37.235660 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-xtables-lock\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.235850 kubelet[3144]: I0913 00:11:37.235681 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-cni-bin-dir\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236073 kubelet[3144]: I0913 00:11:37.235702 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-lib-modules\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236073 kubelet[3144]: I0913 00:11:37.235723 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/287c4c43-ea05-4192-900c-d691bb53c43c-tigera-ca-bundle\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236073 kubelet[3144]: I0913 00:11:37.235752 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-flexvol-driver-host\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236073 kubelet[3144]: I0913 00:11:37.235773 3144 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/287c4c43-ea05-4192-900c-d691bb53c43c-node-certs\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236073 kubelet[3144]: I0913 00:11:37.235794 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-var-run-calico\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.236278 kubelet[3144]: I0913 00:11:37.235831 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/287c4c43-ea05-4192-900c-d691bb53c43c-policysync\") pod \"calico-node-qsxlr\" (UID: \"287c4c43-ea05-4192-900c-d691bb53c43c\") " pod="calico-system/calico-node-qsxlr" Sep 13 00:11:37.344812 kubelet[3144]: E0913 00:11:37.343751 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.344812 kubelet[3144]: W0913 00:11:37.343787 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.344812 kubelet[3144]: E0913 00:11:37.343833 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.351129 kubelet[3144]: E0913 00:11:37.350142 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.351129 kubelet[3144]: W0913 00:11:37.350168 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.351129 kubelet[3144]: E0913 00:11:37.350196 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.368181 kubelet[3144]: E0913 00:11:37.368143 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.368441 kubelet[3144]: W0913 00:11:37.368355 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.368441 kubelet[3144]: E0913 00:11:37.368395 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.388331 containerd[1691]: time="2025-09-13T00:11:37.388271249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676cfb8487-5f2b2,Uid:464650e7-6e13-406c-a3b4-010932c952bb,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:37.446349 containerd[1691]: time="2025-09-13T00:11:37.445481090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:37.446349 containerd[1691]: time="2025-09-13T00:11:37.445547391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:37.446349 containerd[1691]: time="2025-09-13T00:11:37.445588792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:37.446349 containerd[1691]: time="2025-09-13T00:11:37.445717694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:37.481260 systemd[1]: Started cri-containerd-896053da72e7b591002432fb1d3a20f797a71c9e7165a08f0884bae7991367a7.scope - libcontainer container 896053da72e7b591002432fb1d3a20f797a71c9e7165a08f0884bae7991367a7. Sep 13 00:11:37.516425 kubelet[3144]: E0913 00:11:37.516094 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.516425 kubelet[3144]: W0913 00:11:37.516123 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.516425 kubelet[3144]: E0913 00:11:37.516149 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.519367 containerd[1691]: time="2025-09-13T00:11:37.517275871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsxlr,Uid:287c4c43-ea05-4192-900c-d691bb53c43c,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:37.519533 kubelet[3144]: E0913 00:11:37.516270 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:37.519533 kubelet[3144]: E0913 00:11:37.518446 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.519533 kubelet[3144]: W0913 00:11:37.518464 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.519533 kubelet[3144]: E0913 00:11:37.518488 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.519533 kubelet[3144]: E0913 00:11:37.518877 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.519533 kubelet[3144]: W0913 00:11:37.518915 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.519533 kubelet[3144]: E0913 00:11:37.518937 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.519872 kubelet[3144]: E0913 00:11:37.519698 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.519872 kubelet[3144]: W0913 00:11:37.519713 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.519872 kubelet[3144]: E0913 00:11:37.519731 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520100 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.521157 kubelet[3144]: W0913 00:11:37.520117 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520133 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520385 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.521157 kubelet[3144]: W0913 00:11:37.520422 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520435 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520910 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.521157 kubelet[3144]: W0913 00:11:37.520926 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.521157 kubelet[3144]: E0913 00:11:37.520940 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.521671 kubelet[3144]: E0913 00:11:37.521531 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.521671 kubelet[3144]: W0913 00:11:37.521544 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.521671 kubelet[3144]: E0913 00:11:37.521560 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.522498 kubelet[3144]: E0913 00:11:37.522478 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.522498 kubelet[3144]: W0913 00:11:37.522496 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.522521 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.522758 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.523240 kubelet[3144]: W0913 00:11:37.522770 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.522784 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.522982 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.523240 kubelet[3144]: W0913 00:11:37.522994 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.523007 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.523240 kubelet[3144]: E0913 00:11:37.523230 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.523240 kubelet[3144]: W0913 00:11:37.523242 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523255 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523486 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.524427 kubelet[3144]: W0913 00:11:37.523496 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523508 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523773 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.524427 kubelet[3144]: W0913 00:11:37.523784 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523796 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523967 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.524427 kubelet[3144]: W0913 00:11:37.523976 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.524427 kubelet[3144]: E0913 00:11:37.523988 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.524221 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.525499 kubelet[3144]: W0913 00:11:37.524233 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.524246 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.524476 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.525499 kubelet[3144]: W0913 00:11:37.524487 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.524500 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.525117 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.525499 kubelet[3144]: W0913 00:11:37.525135 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.525150 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.525499 kubelet[3144]: E0913 00:11:37.525377 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.525826 kubelet[3144]: W0913 00:11:37.525389 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.525826 kubelet[3144]: E0913 00:11:37.525401 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.525826 kubelet[3144]: E0913 00:11:37.525628 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.525826 kubelet[3144]: W0913 00:11:37.525640 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.525826 kubelet[3144]: E0913 00:11:37.525653 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.539162 kubelet[3144]: E0913 00:11:37.539123 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.539162 kubelet[3144]: W0913 00:11:37.539156 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.539435 kubelet[3144]: E0913 00:11:37.539184 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.539435 kubelet[3144]: I0913 00:11:37.539225 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cafccf29-04c8-4022-9a33-4b449e2cbfbb-kubelet-dir\") pod \"csi-node-driver-x62xh\" (UID: \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\") " pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:37.540099 kubelet[3144]: E0913 00:11:37.540066 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.540099 kubelet[3144]: W0913 00:11:37.540094 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.540496 kubelet[3144]: E0913 00:11:37.540125 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.540496 kubelet[3144]: I0913 00:11:37.540155 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twz9v\" (UniqueName: \"kubernetes.io/projected/cafccf29-04c8-4022-9a33-4b449e2cbfbb-kube-api-access-twz9v\") pod \"csi-node-driver-x62xh\" (UID: \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\") " pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:37.540815 kubelet[3144]: E0913 00:11:37.540745 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.540815 kubelet[3144]: W0913 00:11:37.540775 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.540815 kubelet[3144]: E0913 00:11:37.540794 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.541197 kubelet[3144]: I0913 00:11:37.540819 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cafccf29-04c8-4022-9a33-4b449e2cbfbb-registration-dir\") pod \"csi-node-driver-x62xh\" (UID: \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\") " pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:37.542390 kubelet[3144]: E0913 00:11:37.541627 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.542390 kubelet[3144]: W0913 00:11:37.541645 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.542390 kubelet[3144]: E0913 00:11:37.541807 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.542390 kubelet[3144]: I0913 00:11:37.541842 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cafccf29-04c8-4022-9a33-4b449e2cbfbb-socket-dir\") pod \"csi-node-driver-x62xh\" (UID: \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\") " pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:37.542619 kubelet[3144]: E0913 00:11:37.542505 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.542619 kubelet[3144]: W0913 00:11:37.542518 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.544400 kubelet[3144]: E0913 00:11:37.544224 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.545153 kubelet[3144]: E0913 00:11:37.544759 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.545153 kubelet[3144]: W0913 00:11:37.544773 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.545153 kubelet[3144]: E0913 00:11:37.544906 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.546068 kubelet[3144]: E0913 00:11:37.545725 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.546068 kubelet[3144]: W0913 00:11:37.545738 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.546068 kubelet[3144]: E0913 00:11:37.545780 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.547399 kubelet[3144]: E0913 00:11:37.547324 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.547399 kubelet[3144]: W0913 00:11:37.547340 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.547911 kubelet[3144]: E0913 00:11:37.547604 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.547911 kubelet[3144]: I0913 00:11:37.547650 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cafccf29-04c8-4022-9a33-4b449e2cbfbb-varrun\") pod \"csi-node-driver-x62xh\" (UID: \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\") " pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:37.548473 kubelet[3144]: E0913 00:11:37.548114 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.548473 kubelet[3144]: W0913 00:11:37.548130 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.548473 kubelet[3144]: E0913 00:11:37.548443 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.549301 kubelet[3144]: E0913 00:11:37.549098 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.549301 kubelet[3144]: W0913 00:11:37.549113 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.549301 kubelet[3144]: E0913 00:11:37.549130 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.551610 kubelet[3144]: E0913 00:11:37.551594 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.551814 kubelet[3144]: W0913 00:11:37.551795 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.551977 kubelet[3144]: E0913 00:11:37.551921 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.552527 kubelet[3144]: E0913 00:11:37.552425 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.553061 kubelet[3144]: W0913 00:11:37.552675 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.553180 kubelet[3144]: E0913 00:11:37.553164 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.553938 kubelet[3144]: E0913 00:11:37.553922 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.554147 kubelet[3144]: W0913 00:11:37.554114 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.554477 kubelet[3144]: E0913 00:11:37.554373 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.555237 kubelet[3144]: E0913 00:11:37.555131 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.555444 kubelet[3144]: W0913 00:11:37.555425 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.555637 kubelet[3144]: E0913 00:11:37.555619 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.556729 kubelet[3144]: E0913 00:11:37.556634 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.556729 kubelet[3144]: W0913 00:11:37.556651 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.556729 kubelet[3144]: E0913 00:11:37.556666 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.578762 containerd[1691]: time="2025-09-13T00:11:37.578646680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:37.578762 containerd[1691]: time="2025-09-13T00:11:37.578725982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:37.579739 containerd[1691]: time="2025-09-13T00:11:37.579408093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:37.579739 containerd[1691]: time="2025-09-13T00:11:37.579667797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:37.608266 systemd[1]: Started cri-containerd-61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298.scope - libcontainer container 61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298. Sep 13 00:11:37.651725 kubelet[3144]: E0913 00:11:37.651692 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.651986 kubelet[3144]: W0913 00:11:37.651960 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.652170 kubelet[3144]: E0913 00:11:37.652150 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.653621 kubelet[3144]: E0913 00:11:37.653447 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.653621 kubelet[3144]: W0913 00:11:37.653467 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.653621 kubelet[3144]: E0913 00:11:37.653489 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.654455 kubelet[3144]: E0913 00:11:37.654267 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.654455 kubelet[3144]: W0913 00:11:37.654284 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.654455 kubelet[3144]: E0913 00:11:37.654314 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.655307 kubelet[3144]: E0913 00:11:37.654918 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.655307 kubelet[3144]: W0913 00:11:37.654935 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.655307 kubelet[3144]: E0913 00:11:37.654962 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.656222 kubelet[3144]: E0913 00:11:37.655693 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.656222 kubelet[3144]: W0913 00:11:37.655712 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.656222 kubelet[3144]: E0913 00:11:37.655752 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.657035 kubelet[3144]: E0913 00:11:37.656783 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.657035 kubelet[3144]: W0913 00:11:37.656800 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.657035 kubelet[3144]: E0913 00:11:37.656920 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.658089 kubelet[3144]: E0913 00:11:37.657255 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.658089 kubelet[3144]: W0913 00:11:37.657269 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.658089 kubelet[3144]: E0913 00:11:37.658007 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.658729 kubelet[3144]: E0913 00:11:37.658511 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.658729 kubelet[3144]: W0913 00:11:37.658526 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.658729 kubelet[3144]: E0913 00:11:37.658701 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.659589 kubelet[3144]: E0913 00:11:37.659162 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.659589 kubelet[3144]: W0913 00:11:37.659178 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.659589 kubelet[3144]: E0913 00:11:37.659412 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.659995 kubelet[3144]: E0913 00:11:37.659831 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.659995 kubelet[3144]: W0913 00:11:37.659845 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.661803 kubelet[3144]: E0913 00:11:37.660838 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.661803 kubelet[3144]: W0913 00:11:37.660853 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.661803 kubelet[3144]: E0913 00:11:37.661748 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.661803 kubelet[3144]: E0913 00:11:37.661770 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.662640 kubelet[3144]: E0913 00:11:37.662624 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.662743 kubelet[3144]: W0913 00:11:37.662730 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.663087 kubelet[3144]: E0913 00:11:37.663069 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.663821 kubelet[3144]: E0913 00:11:37.663803 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.664062 kubelet[3144]: W0913 00:11:37.663982 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.664250 kubelet[3144]: E0913 00:11:37.664234 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.664750 kubelet[3144]: E0913 00:11:37.664536 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.664750 kubelet[3144]: W0913 00:11:37.664550 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.664750 kubelet[3144]: E0913 00:11:37.664724 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.665883 kubelet[3144]: E0913 00:11:37.665464 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.665883 kubelet[3144]: W0913 00:11:37.665479 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.666532 kubelet[3144]: E0913 00:11:37.666330 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.666532 kubelet[3144]: W0913 00:11:37.666345 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.666824 kubelet[3144]: E0913 00:11:37.666710 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.667261 kubelet[3144]: E0913 00:11:37.666971 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.668508 kubelet[3144]: E0913 00:11:37.668119 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.668508 kubelet[3144]: W0913 00:11:37.668137 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.668508 kubelet[3144]: E0913 00:11:37.668237 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.668508 kubelet[3144]: E0913 00:11:37.668413 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.668508 kubelet[3144]: W0913 00:11:37.668423 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.669263 kubelet[3144]: E0913 00:11:37.668759 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.669522 kubelet[3144]: E0913 00:11:37.669507 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.669617 kubelet[3144]: W0913 00:11:37.669602 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.670250 kubelet[3144]: E0913 00:11:37.670232 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.670389 kubelet[3144]: E0913 00:11:37.670351 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.670489 kubelet[3144]: W0913 00:11:37.670478 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.670832 kubelet[3144]: E0913 00:11:37.670712 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.673352 kubelet[3144]: E0913 00:11:37.673156 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.673352 kubelet[3144]: W0913 00:11:37.673173 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.673352 kubelet[3144]: E0913 00:11:37.673202 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.674936 kubelet[3144]: E0913 00:11:37.674816 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.674936 kubelet[3144]: W0913 00:11:37.674832 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.675468 kubelet[3144]: E0913 00:11:37.675323 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:37.676125 kubelet[3144]: E0913 00:11:37.675844 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.676125 kubelet[3144]: W0913 00:11:37.675860 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.676538 kubelet[3144]: E0913 00:11:37.676342 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.677342 kubelet[3144]: E0913 00:11:37.676921 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.677342 kubelet[3144]: W0913 00:11:37.676937 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.677342 kubelet[3144]: E0913 00:11:37.676958 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.678261 kubelet[3144]: E0913 00:11:37.678243 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.678353 kubelet[3144]: W0913 00:11:37.678339 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.678604 kubelet[3144]: E0913 00:11:37.678564 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:37.694528 containerd[1691]: time="2025-09-13T00:11:37.694484786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-676cfb8487-5f2b2,Uid:464650e7-6e13-406c-a3b4-010932c952bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"896053da72e7b591002432fb1d3a20f797a71c9e7165a08f0884bae7991367a7\"" Sep 13 00:11:37.699213 containerd[1691]: time="2025-09-13T00:11:37.699087262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsxlr,Uid:287c4c43-ea05-4192-900c-d691bb53c43c,Namespace:calico-system,Attempt:0,} returns sandbox id \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\"" Sep 13 00:11:37.700289 containerd[1691]: time="2025-09-13T00:11:37.700252981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:11:37.707810 kubelet[3144]: E0913 00:11:37.707703 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:37.707810 kubelet[3144]: W0913 00:11:37.707727 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:37.707810 kubelet[3144]: E0913 00:11:37.707765 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:38.922822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468458970.mount: Deactivated successfully. Sep 13 00:11:39.575971 kubelet[3144]: E0913 00:11:39.575587 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:40.696182 containerd[1691]: time="2025-09-13T00:11:40.695550052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:40.698967 containerd[1691]: time="2025-09-13T00:11:40.698788105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:11:40.703291 containerd[1691]: time="2025-09-13T00:11:40.703232278Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:40.708958 containerd[1691]: time="2025-09-13T00:11:40.708077557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:40.709902 containerd[1691]: time="2025-09-13T00:11:40.709713384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.00863219s" Sep 13 00:11:40.709902 containerd[1691]: time="2025-09-13T00:11:40.709756785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:11:40.713810 containerd[1691]: time="2025-09-13T00:11:40.713142440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:11:40.744488 containerd[1691]: time="2025-09-13T00:11:40.744444653Z" level=info msg="CreateContainer within sandbox \"896053da72e7b591002432fb1d3a20f797a71c9e7165a08f0884bae7991367a7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:11:40.783174 containerd[1691]: time="2025-09-13T00:11:40.783129987Z" level=info msg="CreateContainer within sandbox \"896053da72e7b591002432fb1d3a20f797a71c9e7165a08f0884bae7991367a7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9b93b38ef0137881681bb3583c928cb00f252dfd13d5928add548873ffbce70\"" Sep 13 00:11:40.785087 containerd[1691]: time="2025-09-13T00:11:40.784125103Z" level=info msg="StartContainer for \"e9b93b38ef0137881681bb3583c928cb00f252dfd13d5928add548873ffbce70\"" Sep 13 00:11:40.817593 systemd[1]: Started cri-containerd-e9b93b38ef0137881681bb3583c928cb00f252dfd13d5928add548873ffbce70.scope - libcontainer container e9b93b38ef0137881681bb3583c928cb00f252dfd13d5928add548873ffbce70. 
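The repeated kubelet triplet above (two driver-call.go lines, then a plugins.go line) is one failed FlexVolume probe: kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, so stdout is empty and the JSON decode fails with "unexpected end of JSON input". A driver that is present must answer init with a one-line JSON status object. The stub below is a minimal sketch of that contract in Go; it is illustrative only, not the real nodeagent~uds driver, which a later container installs.

```go
// Minimal sketch of the FlexVolume driver-call contract (standard protocol
// assumed). It shows why a missing binary, and hence empty stdout, produces
// kubelet's "unexpected end of JSON input" seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object kubelet expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// kubelet JSON-decodes whatever the driver prints; printing nothing
		// at all is exactly the failure mode logged above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```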
Sep 13 00:11:40.883799 containerd[1691]: time="2025-09-13T00:11:40.883748135Z" level=info msg="StartContainer for \"e9b93b38ef0137881681bb3583c928cb00f252dfd13d5928add548873ffbce70\" returns successfully" Sep 13 00:11:41.576414 kubelet[3144]: E0913 00:11:41.575242 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:41.710944 kubelet[3144]: I0913 00:11:41.710499 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-676cfb8487-5f2b2" podStartSLOduration=2.698340532 podStartE2EDuration="5.710477779s" podCreationTimestamp="2025-09-13 00:11:36 +0000 UTC" firstStartedPulling="2025-09-13 00:11:37.699401567 +0000 UTC m=+27.258391924" lastFinishedPulling="2025-09-13 00:11:40.711538814 +0000 UTC m=+30.270529171" observedRunningTime="2025-09-13 00:11:41.708168641 +0000 UTC m=+31.267158898" watchObservedRunningTime="2025-09-13 00:11:41.710477779 +0000 UTC m=+31.269468036" Sep 13 00:11:41.758705 kubelet[3144]: E0913 00:11:41.758659 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.758705 kubelet[3144]: W0913 00:11:41.758695 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.758963 kubelet[3144]: E0913 00:11:41.758725 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.759038 kubelet[3144]: E0913 00:11:41.759006 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.759092 kubelet[3144]: W0913 00:11:41.759042 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.759092 kubelet[3144]: E0913 00:11:41.759061 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.759297 kubelet[3144]: E0913 00:11:41.759280 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.759297 kubelet[3144]: W0913 00:11:41.759292 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.759404 kubelet[3144]: E0913 00:11:41.759306 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.759621 kubelet[3144]: E0913 00:11:41.759586 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.759953 kubelet[3144]: W0913 00:11:41.759626 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.759953 kubelet[3144]: E0913 00:11:41.759642 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.759953 kubelet[3144]: E0913 00:11:41.759926 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.759953 kubelet[3144]: W0913 00:11:41.759938 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.759953 kubelet[3144]: E0913 00:11:41.759960 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.760288 kubelet[3144]: E0913 00:11:41.760203 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.760288 kubelet[3144]: W0913 00:11:41.760215 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.760288 kubelet[3144]: E0913 00:11:41.760228 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.760808 kubelet[3144]: E0913 00:11:41.760427 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.760808 kubelet[3144]: W0913 00:11:41.760446 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.760808 kubelet[3144]: E0913 00:11:41.760459 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.761064 kubelet[3144]: E0913 00:11:41.760829 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.761064 kubelet[3144]: W0913 00:11:41.760841 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.761064 kubelet[3144]: E0913 00:11:41.760857 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.761223 kubelet[3144]: E0913 00:11:41.761207 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.761223 kubelet[3144]: W0913 00:11:41.761220 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.761330 kubelet[3144]: E0913 00:11:41.761250 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.761618 kubelet[3144]: E0913 00:11:41.761490 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.761618 kubelet[3144]: W0913 00:11:41.761525 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.761618 kubelet[3144]: E0913 00:11:41.761554 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.761809 kubelet[3144]: E0913 00:11:41.761792 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.761809 kubelet[3144]: W0913 00:11:41.761806 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.761931 kubelet[3144]: E0913 00:11:41.761820 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.762076 kubelet[3144]: E0913 00:11:41.762061 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.762076 kubelet[3144]: W0913 00:11:41.762074 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.762076 kubelet[3144]: E0913 00:11:41.762087 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.762458 kubelet[3144]: E0913 00:11:41.762356 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.762458 kubelet[3144]: W0913 00:11:41.762368 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.762458 kubelet[3144]: E0913 00:11:41.762382 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.762679 kubelet[3144]: E0913 00:11:41.762582 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.762679 kubelet[3144]: W0913 00:11:41.762592 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.762679 kubelet[3144]: E0913 00:11:41.762604 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.763138 kubelet[3144]: E0913 00:11:41.762801 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.763138 kubelet[3144]: W0913 00:11:41.762814 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.763138 kubelet[3144]: E0913 00:11:41.762827 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.791782 kubelet[3144]: E0913 00:11:41.791619 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.791782 kubelet[3144]: W0913 00:11:41.791649 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.791782 kubelet[3144]: E0913 00:11:41.791675 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.792711 kubelet[3144]: E0913 00:11:41.792415 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.792711 kubelet[3144]: W0913 00:11:41.792433 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.792711 kubelet[3144]: E0913 00:11:41.792461 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.793080 kubelet[3144]: E0913 00:11:41.792947 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.793080 kubelet[3144]: W0913 00:11:41.792961 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.793080 kubelet[3144]: E0913 00:11:41.792982 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.794252 kubelet[3144]: E0913 00:11:41.793697 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.794252 kubelet[3144]: W0913 00:11:41.793813 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.794252 kubelet[3144]: E0913 00:11:41.794059 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.794902 kubelet[3144]: E0913 00:11:41.794550 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.794902 kubelet[3144]: W0913 00:11:41.794564 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.794902 kubelet[3144]: E0913 00:11:41.794632 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.797601 kubelet[3144]: E0913 00:11:41.797184 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.797601 kubelet[3144]: W0913 00:11:41.797202 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.797601 kubelet[3144]: E0913 00:11:41.797220 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.798165 kubelet[3144]: E0913 00:11:41.798097 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.798285 kubelet[3144]: W0913 00:11:41.798273 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.799254 kubelet[3144]: E0913 00:11:41.799237 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.800717 kubelet[3144]: E0913 00:11:41.799683 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.800717 kubelet[3144]: W0913 00:11:41.799699 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.800717 kubelet[3144]: E0913 00:11:41.800184 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.801249 kubelet[3144]: E0913 00:11:41.801211 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.801249 kubelet[3144]: W0913 00:11:41.801226 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.801774 kubelet[3144]: E0913 00:11:41.801674 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.801774 kubelet[3144]: W0913 00:11:41.801689 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.802411 kubelet[3144]: E0913 00:11:41.802197 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.802411 kubelet[3144]: E0913 00:11:41.802222 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.802722 kubelet[3144]: E0913 00:11:41.802566 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.802722 kubelet[3144]: W0913 00:11:41.802579 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.802722 kubelet[3144]: E0913 00:11:41.802611 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.804119 kubelet[3144]: E0913 00:11:41.803917 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.804119 kubelet[3144]: W0913 00:11:41.803934 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.804119 kubelet[3144]: E0913 00:11:41.803950 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.805408 kubelet[3144]: E0913 00:11:41.805109 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.805408 kubelet[3144]: W0913 00:11:41.805125 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.805408 kubelet[3144]: E0913 00:11:41.805234 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.806506 kubelet[3144]: E0913 00:11:41.806348 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.806506 kubelet[3144]: W0913 00:11:41.806363 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.806506 kubelet[3144]: E0913 00:11:41.806462 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.807189 kubelet[3144]: E0913 00:11:41.807061 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.807189 kubelet[3144]: W0913 00:11:41.807077 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.807189 kubelet[3144]: E0913 00:11:41.807118 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.808041 kubelet[3144]: E0913 00:11:41.807857 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.808041 kubelet[3144]: W0913 00:11:41.807872 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.808041 kubelet[3144]: E0913 00:11:41.807886 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.809427 kubelet[3144]: E0913 00:11:41.809273 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.809427 kubelet[3144]: W0913 00:11:41.809289 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.809427 kubelet[3144]: E0913 00:11:41.809309 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:11:41.809729 kubelet[3144]: E0913 00:11:41.809672 3144 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:11:41.809729 kubelet[3144]: W0913 00:11:41.809687 3144 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:11:41.809729 kubelet[3144]: E0913 00:11:41.809702 3144 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:11:41.936502 containerd[1691]: time="2025-09-13T00:11:41.936445381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:41.939435 containerd[1691]: time="2025-09-13T00:11:41.939217727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:11:41.942755 containerd[1691]: time="2025-09-13T00:11:41.942551281Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:41.949632 containerd[1691]: time="2025-09-13T00:11:41.949571896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:41.950690 containerd[1691]: time="2025-09-13T00:11:41.950145106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.236957064s" Sep 13 00:11:41.950690 containerd[1691]: time="2025-09-13T00:11:41.950188406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:11:41.953534 containerd[1691]: time="2025-09-13T00:11:41.953503561Z" level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:11:41.986248 containerd[1691]: time="2025-09-13T00:11:41.986200796Z" level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6\"" Sep 13 00:11:41.987108 containerd[1691]: time="2025-09-13T00:11:41.987072311Z" level=info msg="StartContainer for \"19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6\"" Sep 13 00:11:42.026236 systemd[1]: Started cri-containerd-19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6.scope - libcontainer container 19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6. Sep 13 00:11:42.066330 containerd[1691]: time="2025-09-13T00:11:42.066268108Z" level=info msg="StartContainer for \"19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6\" returns successfully" Sep 13 00:11:42.078202 systemd[1]: cri-containerd-19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6.scope: Deactivated successfully. Sep 13 00:11:42.723748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6-rootfs.mount: Deactivated successfully. 
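The flexvol-driver container that just ran (created from the pod2daemon-flexvol image pulled above) is what installs the uds binary into the kubelet plugin directory, which is why the FlexVolume probe failures stop recurring after this point. A node-side check, sketched below under the assumption that the driver path from the log is unchanged, is to stat the binary and invoke init by hand, the same way kubelet's prober does:

```go
// Sketch: verify the FlexVolume binary exists and answers "init".
// The path is taken from the kubelet errors above; everything else
// here is an illustrative assumption.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

func main() {
	info, err := os.Stat(driver)
	if err != nil {
		// Before flexvol-driver runs, this is the state kubelet kept hitting.
		fmt.Println("driver not installed yet:", err)
		return
	}
	fmt.Printf("found %s (mode %s)\n", driver, info.Mode())

	out, err := exec.Command(driver, "init").Output()
	fmt.Printf("init stdout: %q, err: %v\n", out, err)
}
```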
Sep 13 00:11:43.535876 containerd[1691]: time="2025-09-13T00:11:43.535804783Z" level=info msg="shim disconnected" id=19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6 namespace=k8s.io Sep 13 00:11:43.535876 containerd[1691]: time="2025-09-13T00:11:43.535867984Z" level=warning msg="cleaning up after shim disconnected" id=19d56b26479bd24907ecb9df9eee867c9e27e1387e05d67e336ed33cdfb506c6 namespace=k8s.io Sep 13 00:11:43.535876 containerd[1691]: time="2025-09-13T00:11:43.535879084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:43.575435 kubelet[3144]: E0913 00:11:43.575376 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:43.701312 containerd[1691]: time="2025-09-13T00:11:43.701269893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:11:45.575068 kubelet[3144]: E0913 00:11:45.574944 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:47.575816 kubelet[3144]: E0913 00:11:47.575757 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:47.867785 containerd[1691]: time="2025-09-13T00:11:47.867641810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:47.871969 containerd[1691]: time="2025-09-13T00:11:47.871805678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:11:47.876193 containerd[1691]: time="2025-09-13T00:11:47.875906046Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:47.882047 containerd[1691]: time="2025-09-13T00:11:47.881280134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:47.882297 containerd[1691]: time="2025-09-13T00:11:47.882260750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.180943956s" Sep 13 00:11:47.882382 containerd[1691]: time="2025-09-13T00:11:47.882304551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:11:47.885859 containerd[1691]: time="2025-09-13T00:11:47.885818308Z" 
level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:11:47.934279 containerd[1691]: time="2025-09-13T00:11:47.934232003Z" level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36\"" Sep 13 00:11:47.936299 containerd[1691]: time="2025-09-13T00:11:47.934872213Z" level=info msg="StartContainer for \"b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36\"" Sep 13 00:11:47.969243 systemd[1]: Started cri-containerd-b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36.scope - libcontainer container b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36. Sep 13 00:11:48.008709 containerd[1691]: time="2025-09-13T00:11:48.007911212Z" level=info msg="StartContainer for \"b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36\" returns successfully" Sep 13 00:11:49.575717 kubelet[3144]: E0913 00:11:49.575653 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:49.655864 containerd[1691]: time="2025-09-13T00:11:49.655805555Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:11:49.659254 systemd[1]: cri-containerd-b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36.scope: Deactivated successfully. Sep 13 00:11:49.678854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36-rootfs.mount: Deactivated successfully. Sep 13 00:11:49.688236 kubelet[3144]: I0913 00:11:49.688197 3144 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:11:49.744483 systemd[1]: Created slice kubepods-burstable-pod7857ee55_1894_4eba_8cff_954726351357.slice - libcontainer container kubepods-burstable-pod7857ee55_1894_4eba_8cff_954726351357.slice. Sep 13 00:11:49.769145 systemd[1]: Created slice kubepods-burstable-pod4d37b03d_0dc2_4253_bf4e_944367e78e4b.slice - libcontainer container kubepods-burstable-pod4d37b03d_0dc2_4253_bf4e_944367e78e4b.slice. 
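The "failed to reload cni configuration" error above reflects ordering, not a defect: install-cni writes /etc/cni/net.d/calico-kubeconfig first, containerd rescans the directory on every filesystem write event, and until the calico conflist lands there the loader finds no network config. The sketch below mimics that check; the *.conflist pattern and field names follow the usual CNI layout and are assumptions, not taken from this log:

```go
// Sketch: reproduce containerd's "no network config found" condition by
// scanning /etc/cni/net.d for conflist files and parsing their headers.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
	if err != nil || len(matches) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	for _, f := range matches {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Println(f, err)
			continue
		}
		var conf struct {
			Name    string            `json:"name"`
			Plugins []json.RawMessage `json:"plugins"`
		}
		if err := json.Unmarshal(data, &conf); err != nil {
			fmt.Printf("%s: invalid CNI config: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: network %q with %d plugin(s)\n", f, conf.Name, len(conf.Plugins))
	}
}
```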
Sep 13 00:11:49.783796 systemd[1]: Created slice kubepods-besteffort-podba7fa9d7_76c3_4c04_804c_9f129daebad5.slice - libcontainer container kubepods-besteffort-podba7fa9d7_76c3_4c04_804c_9f129daebad5.slice.
Sep 13 00:11:49.800480 systemd[1]: Created slice kubepods-besteffort-podfe335ca4_a24a_461e_842a_3dde2493b4a1.slice - libcontainer container kubepods-besteffort-podfe335ca4_a24a_461e_842a_3dde2493b4a1.slice.
Sep 13 00:11:49.810160 systemd[1]: Created slice kubepods-besteffort-pod52a475e1_f54f_4d56_a7c8_53b7e0ab2cd3.slice - libcontainer container kubepods-besteffort-pod52a475e1_f54f_4d56_a7c8_53b7e0ab2cd3.slice.
Sep 13 00:11:49.817729 systemd[1]: Created slice kubepods-besteffort-pod8b8b36a7_d494_4570_bf45_9dab19166870.slice - libcontainer container kubepods-besteffort-pod8b8b36a7_d494_4570_bf45_9dab19166870.slice.
Sep 13 00:11:49.826989 systemd[1]: Created slice kubepods-besteffort-pod1a04c4d5_b360_4717_b663_a87e97f493a4.slice - libcontainer container kubepods-besteffort-pod1a04c4d5_b360_4717_b663_a87e97f493a4.slice.
Sep 13 00:11:50.283645 kubelet[3144]: I0913 00:11:49.850402 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d37b03d-0dc2-4253-bf4e-944367e78e4b-config-volume\") pod \"coredns-668d6bf9bc-z8qzz\" (UID: \"4d37b03d-0dc2-4253-bf4e-944367e78e4b\") " pod="kube-system/coredns-668d6bf9bc-z8qzz"
Sep 13 00:11:50.283645 kubelet[3144]: I0913 00:11:49.850439 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrst\" (UniqueName: \"kubernetes.io/projected/4d37b03d-0dc2-4253-bf4e-944367e78e4b-kube-api-access-6wrst\") pod \"coredns-668d6bf9bc-z8qzz\" (UID: \"4d37b03d-0dc2-4253-bf4e-944367e78e4b\") " pod="kube-system/coredns-668d6bf9bc-z8qzz"
Sep 13 00:11:50.283645 kubelet[3144]: I0913 00:11:49.850459 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qpv\" (UniqueName: \"kubernetes.io/projected/52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3-kube-api-access-l9qpv\") pod \"calico-apiserver-6cc8c97746-j89mh\" (UID: \"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3\") " pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh"
Sep 13 00:11:50.283645 kubelet[3144]: I0913 00:11:49.850480 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe335ca4-a24a-461e-842a-3dde2493b4a1-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc4fdd754-sspxr\" (UID: \"fe335ca4-a24a-461e-842a-3dde2493b4a1\") " pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr"
Sep 13 00:11:50.283645 kubelet[3144]: I0913 00:11:49.850501 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a04c4d5-b360-4717-b663-a87e97f493a4-calico-apiserver-certs\") pod \"calico-apiserver-6cc8c97746-nwqsn\" (UID: \"1a04c4d5-b360-4717-b663-a87e97f493a4\") " pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn"
Sep 13 00:11:50.284002 kubelet[3144]: I0913 00:11:49.850517 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqltl\" (UniqueName: \"kubernetes.io/projected/fe335ca4-a24a-461e-842a-3dde2493b4a1-kube-api-access-pqltl\") pod \"calico-kube-controllers-6dc4fdd754-sspxr\" (UID: \"fe335ca4-a24a-461e-842a-3dde2493b4a1\") " pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr"
Sep 13 00:11:50.284002 kubelet[3144]: I0913 00:11:49.850535 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-ca-bundle\") pod \"whisker-69c95d7488-drvxq\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " pod="calico-system/whisker-69c95d7488-drvxq"
Sep 13 00:11:50.284002 kubelet[3144]: I0913 00:11:49.850552 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba7fa9d7-76c3-4c04-804c-9f129daebad5-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-dsfk4\" (UID: \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\") " pod="calico-system/goldmane-54d579b49d-dsfk4"
Sep 13 00:11:50.284002 kubelet[3144]: I0913 00:11:49.850572 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5x92\" (UniqueName: \"kubernetes.io/projected/1a04c4d5-b360-4717-b663-a87e97f493a4-kube-api-access-b5x92\") pod \"calico-apiserver-6cc8c97746-nwqsn\" (UID: \"1a04c4d5-b360-4717-b663-a87e97f493a4\") " pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn"
Sep 13 00:11:50.284002 kubelet[3144]: I0913 00:11:49.850592 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3-calico-apiserver-certs\") pod \"calico-apiserver-6cc8c97746-j89mh\" (UID: \"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3\") " pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh"
Sep 13 00:11:50.284329 kubelet[3144]: I0913 00:11:49.850608 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmpz\" (UniqueName: \"kubernetes.io/projected/7857ee55-1894-4eba-8cff-954726351357-kube-api-access-ftmpz\") pod \"coredns-668d6bf9bc-dhnxs\" (UID: \"7857ee55-1894-4eba-8cff-954726351357\") " pod="kube-system/coredns-668d6bf9bc-dhnxs"
Sep 13 00:11:50.284329 kubelet[3144]: I0913 00:11:49.850622 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qg9\" (UniqueName: \"kubernetes.io/projected/8b8b36a7-d494-4570-bf45-9dab19166870-kube-api-access-p8qg9\") pod \"whisker-69c95d7488-drvxq\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " pod="calico-system/whisker-69c95d7488-drvxq"
Sep 13 00:11:50.284329 kubelet[3144]: I0913 00:11:49.850644 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvch5\" (UniqueName: \"kubernetes.io/projected/ba7fa9d7-76c3-4c04-804c-9f129daebad5-kube-api-access-dvch5\") pod \"goldmane-54d579b49d-dsfk4\" (UID: \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\") " pod="calico-system/goldmane-54d579b49d-dsfk4"
Sep 13 00:11:50.284329 kubelet[3144]: I0913 00:11:49.850664 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7857ee55-1894-4eba-8cff-954726351357-config-volume\") pod \"coredns-668d6bf9bc-dhnxs\" (UID: \"7857ee55-1894-4eba-8cff-954726351357\") " pod="kube-system/coredns-668d6bf9bc-dhnxs"
Sep 13 00:11:50.284329 kubelet[3144]: I0913 00:11:49.850681 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-backend-key-pair\") pod \"whisker-69c95d7488-drvxq\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " pod="calico-system/whisker-69c95d7488-drvxq"
Sep 13 00:11:50.284612 kubelet[3144]: I0913 00:11:49.850696 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba7fa9d7-76c3-4c04-804c-9f129daebad5-config\") pod \"goldmane-54d579b49d-dsfk4\" (UID: \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\") " pod="calico-system/goldmane-54d579b49d-dsfk4"
Sep 13 00:11:50.284612 kubelet[3144]: I0913 00:11:49.850711 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ba7fa9d7-76c3-4c04-804c-9f129daebad5-goldmane-key-pair\") pod \"goldmane-54d579b49d-dsfk4\" (UID: \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\") " pod="calico-system/goldmane-54d579b49d-dsfk4"
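The reconciler burst above records one VerifyControllerAttachedVolume line per volume kubelet must have in place before the new pods can start. For ad-hoc analysis of such a burst, a sketch like the following pulls out the volume, unique name, pod, and UID fields; the regular expression is an assumption matched to the message shape, with the log's escaped quotes removed:

```go
// Sketch: extract the fields of one reconciler_common message.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// One reconciler message from the burst above (escapes removed).
	line := `operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7857ee55-1894-4eba-8cff-954726351357-config-volume") pod "coredns-668d6bf9bc-dhnxs" (UID: "7857ee55-1894-4eba-8cff-954726351357")`
	re := regexp.MustCompile(`volume "([^"]+)" \(UniqueName: "([^"]+)"\) pod "([^"]+)" \(UID: "([^"]+)"\)`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("volume=%s unique=%s pod=%s uid=%s\n", m[1], m[2], m[3], m[4])
}
```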
Sep 13 00:11:50.589130 containerd[1691]: time="2025-09-13T00:11:50.588632164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhnxs,Uid:7857ee55-1894-4eba-8cff-954726351357,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:50.589130 containerd[1691]: time="2025-09-13T00:11:50.588630664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4fdd754-sspxr,Uid:fe335ca4-a24a-461e-842a-3dde2493b4a1,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:50.593055 containerd[1691]: time="2025-09-13T00:11:50.593014836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-nwqsn,Uid:1a04c4d5-b360-4717-b663-a87e97f493a4,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:50.594795 containerd[1691]: time="2025-09-13T00:11:50.594765865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-j89mh,Uid:52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:50.626628 containerd[1691]: time="2025-09-13T00:11:50.626191280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dsfk4,Uid:ba7fa9d7-76c3-4c04-804c-9f129daebad5,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:50.626628 containerd[1691]: time="2025-09-13T00:11:50.626410884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69c95d7488-drvxq,Uid:8b8b36a7-d494-4570-bf45-9dab19166870,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:50.626628 containerd[1691]: time="2025-09-13T00:11:50.626570887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z8qzz,Uid:4d37b03d-0dc2-4253-bf4e-944367e78e4b,Namespace:kube-system,Attempt:0,}" Sep 13 00:11:50.934150 containerd[1691]: time="2025-09-13T00:11:50.934065333Z" level=info msg="shim disconnected" id=b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36 namespace=k8s.io Sep 13 00:11:50.934150 containerd[1691]: time="2025-09-13T00:11:50.934147534Z" level=warning msg="cleaning up after shim disconnected" id=b4b6364e7eac2327162d6d24a5510f45c5f88adc644f3bdb7bf4522d2b639d36 namespace=k8s.io Sep 13 00:11:50.934150 containerd[1691]: time="2025-09-13T00:11:50.934159634Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:11:51.309117 containerd[1691]: time="2025-09-13T00:11:51.308649680Z" level=error msg="Failed to destroy network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.309117 containerd[1691]: time="2025-09-13T00:11:51.309070187Z" level=error msg="encountered an error cleaning up failed sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.309311 containerd[1691]: time="2025-09-13T00:11:51.309141888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z8qzz,Uid:4d37b03d-0dc2-4253-bf4e-944367e78e4b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.309753 kubelet[3144]: E0913 00:11:51.309700 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.311187 kubelet[3144]: E0913 00:11:51.309792 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z8qzz" Sep 13 00:11:51.311187 kubelet[3144]: E0913 00:11:51.309818 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z8qzz" Sep 13 00:11:51.311187 kubelet[3144]: E0913 00:11:51.309874 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z8qzz_kube-system(4d37b03d-0dc2-4253-bf4e-944367e78e4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z8qzz_kube-system(4d37b03d-0dc2-4253-bf4e-944367e78e4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z8qzz" podUID="4d37b03d-0dc2-4253-bf4e-944367e78e4b" Sep 13 00:11:51.327775 containerd[1691]: time="2025-09-13T00:11:51.327566991Z" level=error msg="Failed to destroy network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.328679 containerd[1691]: time="2025-09-13T00:11:51.328506106Z" level=error msg="encountered an error cleaning up failed sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.328893 containerd[1691]: time="2025-09-13T00:11:51.328830911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dsfk4,Uid:ba7fa9d7-76c3-4c04-804c-9f129daebad5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.330360 kubelet[3144]: E0913 00:11:51.330264 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.330584 kubelet[3144]: E0913 00:11:51.330382 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dsfk4" Sep 13 00:11:51.330584 kubelet[3144]: E0913 00:11:51.330412 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dsfk4" Sep 13 00:11:51.330584 kubelet[3144]: E0913 00:11:51.330500 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-dsfk4_calico-system(ba7fa9d7-76c3-4c04-804c-9f129daebad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-dsfk4_calico-system(ba7fa9d7-76c3-4c04-804c-9f129daebad5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-dsfk4" podUID="ba7fa9d7-76c3-4c04-804c-9f129daebad5" Sep 13 00:11:51.332113 containerd[1691]: time="2025-09-13T00:11:51.331995963Z" level=error msg="Failed to destroy network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.332684 containerd[1691]: time="2025-09-13T00:11:51.332654674Z" level=error msg="encountered an error cleaning up failed sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.332889 containerd[1691]: time="2025-09-13T00:11:51.332838977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4fdd754-sspxr,Uid:fe335ca4-a24a-461e-842a-3dde2493b4a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.333363 kubelet[3144]: E0913 00:11:51.333328 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.333635 kubelet[3144]: E0913 00:11:51.333559 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr" Sep 13 00:11:51.333635 kubelet[3144]: E0913 00:11:51.333600 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr" Sep 13 00:11:51.333885 kubelet[3144]: E0913 00:11:51.333762 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc4fdd754-sspxr_calico-system(fe335ca4-a24a-461e-842a-3dde2493b4a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc4fdd754-sspxr_calico-system(fe335ca4-a24a-461e-842a-3dde2493b4a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr" podUID="fe335ca4-a24a-461e-842a-3dde2493b4a1" Sep 13 00:11:51.358837 containerd[1691]: time="2025-09-13T00:11:51.358784803Z" level=error msg="Failed to destroy network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.359186 containerd[1691]: time="2025-09-13T00:11:51.359154109Z" level=error msg="encountered an error cleaning up failed sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.359321 containerd[1691]: time="2025-09-13T00:11:51.359217410Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-j89mh,Uid:52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.360858 containerd[1691]: time="2025-09-13T00:11:51.359504515Z" level=error msg="Failed to destroy network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.360858 containerd[1691]: time="2025-09-13T00:11:51.359802620Z" level=error msg="encountered an error cleaning up failed sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.360858 containerd[1691]: time="2025-09-13T00:11:51.359849020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-nwqsn,Uid:1a04c4d5-b360-4717-b663-a87e97f493a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.361051 kubelet[3144]: E0913 00:11:51.359466 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.361051 kubelet[3144]: E0913 00:11:51.359534 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh" Sep 13 00:11:51.361051 kubelet[3144]: E0913 00:11:51.359563 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh" Sep 13 00:11:51.362598 kubelet[3144]: E0913 00:11:51.359636 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8c97746-j89mh_calico-apiserver(52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8c97746-j89mh_calico-apiserver(52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh" podUID="52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3" Sep 13 00:11:51.362598 kubelet[3144]: E0913 00:11:51.362657 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.362598 kubelet[3144]: E0913 00:11:51.362743 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn" Sep 13 00:11:51.363390 containerd[1691]: time="2025-09-13T00:11:51.362059657Z" level=error msg="Failed to destroy network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.363390 containerd[1691]: time="2025-09-13T00:11:51.363338878Z" level=error msg="encountered an error cleaning up failed sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.363525 kubelet[3144]: E0913 00:11:51.362874 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn" Sep 13 00:11:51.363525 kubelet[3144]: E0913 00:11:51.363118 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cc8c97746-nwqsn_calico-apiserver(1a04c4d5-b360-4717-b663-a87e97f493a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cc8c97746-nwqsn_calico-apiserver(1a04c4d5-b360-4717-b663-a87e97f493a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn" podUID="1a04c4d5-b360-4717-b663-a87e97f493a4" Sep 13 00:11:51.363789 containerd[1691]: time="2025-09-13T00:11:51.363404279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhnxs,Uid:7857ee55-1894-4eba-8cff-954726351357,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.364245 kubelet[3144]: E0913 00:11:51.364154 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.364470 kubelet[3144]: E0913 00:11:51.364307 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dhnxs" Sep 13 00:11:51.364470 kubelet[3144]: E0913 00:11:51.364335 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dhnxs" Sep 13 00:11:51.364470 kubelet[3144]: E0913 00:11:51.364382 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dhnxs_kube-system(7857ee55-1894-4eba-8cff-954726351357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dhnxs_kube-system(7857ee55-1894-4eba-8cff-954726351357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dhnxs" podUID="7857ee55-1894-4eba-8cff-954726351357" Sep 13 00:11:51.365266 containerd[1691]: time="2025-09-13T00:11:51.365227709Z" level=error msg="Failed to destroy network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.365594 containerd[1691]: time="2025-09-13T00:11:51.365562614Z" level=error msg="encountered an error cleaning up failed sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.365675 containerd[1691]: time="2025-09-13T00:11:51.365618515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69c95d7488-drvxq,Uid:8b8b36a7-d494-4570-bf45-9dab19166870,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.365834 kubelet[3144]: E0913 00:11:51.365790 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.365926 kubelet[3144]: E0913 00:11:51.365833 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69c95d7488-drvxq" Sep 13 00:11:51.365926 kubelet[3144]: E0913 00:11:51.365856 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69c95d7488-drvxq" Sep 13 00:11:51.365926 kubelet[3144]: E0913 00:11:51.365894 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69c95d7488-drvxq_calico-system(8b8b36a7-d494-4570-bf45-9dab19166870)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69c95d7488-drvxq_calico-system(8b8b36a7-d494-4570-bf45-9dab19166870)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69c95d7488-drvxq" podUID="8b8b36a7-d494-4570-bf45-9dab19166870" Sep 13 00:11:51.581182 systemd[1]: Created slice kubepods-besteffort-podcafccf29_04c8_4022_9a33_4b449e2cbfbb.slice - libcontainer container kubepods-besteffort-podcafccf29_04c8_4022_9a33_4b449e2cbfbb.slice. 
Sep 13 00:11:51.585521 containerd[1691]: time="2025-09-13T00:11:51.585478423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62xh,Uid:cafccf29-04c8-4022-9a33-4b449e2cbfbb,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:51.667380 containerd[1691]: time="2025-09-13T00:11:51.667320766Z" level=error msg="Failed to destroy network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.667692 containerd[1691]: time="2025-09-13T00:11:51.667660072Z" level=error msg="encountered an error cleaning up failed sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.667809 containerd[1691]: time="2025-09-13T00:11:51.667721573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62xh,Uid:cafccf29-04c8-4022-9a33-4b449e2cbfbb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.668046 kubelet[3144]: E0913 00:11:51.667985 3144 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.668214 kubelet[3144]: E0913 00:11:51.668053 3144 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:51.668214 kubelet[3144]: E0913 00:11:51.668081 3144 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x62xh" Sep 13 00:11:51.668631 kubelet[3144]: E0913 00:11:51.668160 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x62xh_calico-system(cafccf29-04c8-4022-9a33-4b449e2cbfbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x62xh_calico-system(cafccf29-04c8-4022-9a33-4b449e2cbfbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:51.718960 kubelet[3144]: I0913 00:11:51.718925 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:11:51.720088 containerd[1691]: time="2025-09-13T00:11:51.719806928Z" level=info msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" Sep 13 00:11:51.720088 containerd[1691]: time="2025-09-13T00:11:51.720049532Z" level=info msg="Ensure that sandbox 02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe in task-service has been cleanup successfully" Sep 13 00:11:51.724804 kubelet[3144]: I0913 00:11:51.723969 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:11:51.728752 containerd[1691]: time="2025-09-13T00:11:51.728710574Z" level=info msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" Sep 13 00:11:51.730593 containerd[1691]: time="2025-09-13T00:11:51.729073580Z" level=info msg="Ensure that sandbox 4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877 in task-service has been cleanup successfully" Sep 13 00:11:51.732673 kubelet[3144]: I0913 00:11:51.732647 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:11:51.735593 containerd[1691]: time="2025-09-13T00:11:51.735566386Z" level=info msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" Sep 13 00:11:51.737402 containerd[1691]: time="2025-09-13T00:11:51.737371716Z" level=info msg="Ensure that sandbox a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b in task-service has been cleanup successfully" Sep 13 00:11:51.741227 containerd[1691]: time="2025-09-13T00:11:51.741185079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:11:51.742391 kubelet[3144]: I0913 00:11:51.742367 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:11:51.744250 containerd[1691]: time="2025-09-13T00:11:51.744219228Z" level=info msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" Sep 13 00:11:51.745362 containerd[1691]: time="2025-09-13T00:11:51.745318246Z" level=info msg="Ensure that sandbox 47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73 in task-service has been cleanup successfully" Sep 13 00:11:51.751097 kubelet[3144]: I0913 00:11:51.751069 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:11:51.754825 containerd[1691]: time="2025-09-13T00:11:51.752898271Z" level=info msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" Sep 13 00:11:51.754825 containerd[1691]: time="2025-09-13T00:11:51.753163375Z" level=info msg="Ensure that sandbox 
a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1 in task-service has been cleanup successfully" Sep 13 00:11:51.767052 kubelet[3144]: I0913 00:11:51.763705 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:51.783644 containerd[1691]: time="2025-09-13T00:11:51.783393371Z" level=info msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" Sep 13 00:11:51.787085 containerd[1691]: time="2025-09-13T00:11:51.786982830Z" level=info msg="Ensure that sandbox 196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1 in task-service has been cleanup successfully" Sep 13 00:11:51.794381 kubelet[3144]: I0913 00:11:51.794350 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:11:51.796119 containerd[1691]: time="2025-09-13T00:11:51.796075379Z" level=info msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" Sep 13 00:11:51.797179 containerd[1691]: time="2025-09-13T00:11:51.796339084Z" level=info msg="Ensure that sandbox 60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e in task-service has been cleanup successfully" Sep 13 00:11:51.797877 kubelet[3144]: I0913 00:11:51.797842 3144 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:11:51.836778 containerd[1691]: time="2025-09-13T00:11:51.836632045Z" level=info msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" Sep 13 00:11:51.836923 containerd[1691]: time="2025-09-13T00:11:51.836873949Z" level=info msg="Ensure that sandbox 35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1 in task-service has been cleanup successfully" Sep 13 00:11:51.861839 containerd[1691]: time="2025-09-13T00:11:51.861783358Z" level=error msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" failed" error="failed to destroy network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.862332 kubelet[3144]: E0913 00:11:51.862284 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:11:51.862723 kubelet[3144]: E0913 00:11:51.862518 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b"} Sep 13 00:11:51.862723 kubelet[3144]: E0913 00:11:51.862624 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe335ca4-a24a-461e-842a-3dde2493b4a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.862723 kubelet[3144]: E0913 00:11:51.862688 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe335ca4-a24a-461e-842a-3dde2493b4a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr" podUID="fe335ca4-a24a-461e-842a-3dde2493b4a1" Sep 13 00:11:51.888597 containerd[1691]: time="2025-09-13T00:11:51.888525897Z" level=error msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" failed" error="failed to destroy network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.889523 kubelet[3144]: E0913 00:11:51.889330 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:11:51.889523 kubelet[3144]: E0913 00:11:51.889396 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe"} Sep 13 00:11:51.889523 kubelet[3144]: E0913 00:11:51.889445 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.889523 kubelet[3144]: E0913 00:11:51.889477 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba7fa9d7-76c3-4c04-804c-9f129daebad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-dsfk4" podUID="ba7fa9d7-76c3-4c04-804c-9f129daebad5" Sep 13 00:11:51.896298 containerd[1691]: time="2025-09-13T00:11:51.896203123Z" level=error msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" 
failed" error="failed to destroy network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.896834 kubelet[3144]: E0913 00:11:51.896773 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:11:51.897200 kubelet[3144]: E0913 00:11:51.897172 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1"} Sep 13 00:11:51.897360 kubelet[3144]: E0913 00:11:51.897341 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d37b03d-0dc2-4253-bf4e-944367e78e4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.897534 kubelet[3144]: E0913 00:11:51.897508 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d37b03d-0dc2-4253-bf4e-944367e78e4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z8qzz" podUID="4d37b03d-0dc2-4253-bf4e-944367e78e4b" Sep 13 00:11:51.900616 containerd[1691]: time="2025-09-13T00:11:51.900242389Z" level=error msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" failed" error="failed to destroy network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.900717 kubelet[3144]: E0913 00:11:51.900458 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:11:51.900717 kubelet[3144]: E0913 00:11:51.900503 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877"} Sep 13 00:11:51.900717 kubelet[3144]: E0913 
00:11:51.900542 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a04c4d5-b360-4717-b663-a87e97f493a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.900717 kubelet[3144]: E0913 00:11:51.900576 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a04c4d5-b360-4717-b663-a87e97f493a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn" podUID="1a04c4d5-b360-4717-b663-a87e97f493a4" Sep 13 00:11:51.932057 containerd[1691]: time="2025-09-13T00:11:51.930707989Z" level=error msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" failed" error="failed to destroy network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.932607 kubelet[3144]: E0913 00:11:51.932421 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:11:51.932607 kubelet[3144]: E0913 00:11:51.932493 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73"} Sep 13 00:11:51.932607 kubelet[3144]: E0913 00:11:51.932541 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.932607 kubelet[3144]: E0913 00:11:51.932572 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cafccf29-04c8-4022-9a33-4b449e2cbfbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-x62xh" podUID="cafccf29-04c8-4022-9a33-4b449e2cbfbb" Sep 13 00:11:51.936946 containerd[1691]: time="2025-09-13T00:11:51.936879390Z" level=error msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" failed" error="failed to destroy network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.937428 kubelet[3144]: E0913 00:11:51.937155 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:51.937428 kubelet[3144]: E0913 00:11:51.937212 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1"} Sep 13 00:11:51.937428 kubelet[3144]: E0913 00:11:51.937266 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b8b36a7-d494-4570-bf45-9dab19166870\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.937428 kubelet[3144]: E0913 00:11:51.937298 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b8b36a7-d494-4570-bf45-9dab19166870\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69c95d7488-drvxq" podUID="8b8b36a7-d494-4570-bf45-9dab19166870" Sep 13 00:11:51.946227 containerd[1691]: time="2025-09-13T00:11:51.946172243Z" level=error msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" failed" error="failed to destroy network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.946587 kubelet[3144]: E0913 00:11:51.946434 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 
13 00:11:51.946587 kubelet[3144]: E0913 00:11:51.946494 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1"} Sep 13 00:11:51.946587 kubelet[3144]: E0913 00:11:51.946542 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7857ee55-1894-4eba-8cff-954726351357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.946587 kubelet[3144]: E0913 00:11:51.946575 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7857ee55-1894-4eba-8cff-954726351357\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dhnxs" podUID="7857ee55-1894-4eba-8cff-954726351357" Sep 13 00:11:51.951069 containerd[1691]: time="2025-09-13T00:11:51.950999722Z" level=error msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" failed" error="failed to destroy network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:51.951311 kubelet[3144]: E0913 00:11:51.951266 3144 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:11:51.951399 kubelet[3144]: E0913 00:11:51.951375 3144 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e"} Sep 13 00:11:51.951447 kubelet[3144]: E0913 00:11:51.951426 3144 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:51.951533 kubelet[3144]: E0913 00:11:51.951460 3144 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh" podUID="52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3" Sep 13 00:11:52.044068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe-shm.mount: Deactivated successfully. Sep 13 00:11:52.044216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e-shm.mount: Deactivated successfully. Sep 13 00:11:52.044315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877-shm.mount: Deactivated successfully. Sep 13 00:11:52.044412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b-shm.mount: Deactivated successfully. Sep 13 00:11:52.044511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1-shm.mount: Deactivated successfully. Sep 13 00:11:52.044605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1-shm.mount: Deactivated successfully. Sep 13 00:11:57.906435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319098100.mount: Deactivated successfully. Sep 13 00:11:57.949758 containerd[1691]: time="2025-09-13T00:11:57.949697950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:57.951956 containerd[1691]: time="2025-09-13T00:11:57.951888586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:11:57.955338 containerd[1691]: time="2025-09-13T00:11:57.955279341Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:57.966923 containerd[1691]: time="2025-09-13T00:11:57.966851529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:57.967582 containerd[1691]: time="2025-09-13T00:11:57.967539340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.226312761s" Sep 13 00:11:57.967853 containerd[1691]: time="2025-09-13T00:11:57.967729744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:11:57.986318 containerd[1691]: time="2025-09-13T00:11:57.986274546Z" level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:11:58.024746 containerd[1691]: 
time="2025-09-13T00:11:58.024690971Z" level=info msg="CreateContainer within sandbox \"61852aeb815e58d4256c634531748a199593f1dd9e4f10bd25644cce8db2d298\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54\"" Sep 13 00:11:58.025537 containerd[1691]: time="2025-09-13T00:11:58.025349582Z" level=info msg="StartContainer for \"a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54\"" Sep 13 00:11:58.059263 systemd[1]: Started cri-containerd-a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54.scope - libcontainer container a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54. Sep 13 00:11:58.094402 containerd[1691]: time="2025-09-13T00:11:58.094345606Z" level=info msg="StartContainer for \"a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54\" returns successfully" Sep 13 00:11:58.642002 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:11:58.642180 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:11:58.774425 containerd[1691]: time="2025-09-13T00:11:58.774376483Z" level=info msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" Sep 13 00:11:58.875841 kubelet[3144]: I0913 00:11:58.875765 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qsxlr" podStartSLOduration=1.6097691969999999 podStartE2EDuration="21.875739834s" podCreationTimestamp="2025-09-13 00:11:37 +0000 UTC" firstStartedPulling="2025-09-13 00:11:37.70260882 +0000 UTC m=+27.261599077" lastFinishedPulling="2025-09-13 00:11:57.968579457 +0000 UTC m=+47.527569714" observedRunningTime="2025-09-13 00:11:58.873989605 +0000 UTC m=+48.432979962" watchObservedRunningTime="2025-09-13 00:11:58.875739834 +0000 UTC m=+48.434730091" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.891 [INFO][4356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.891 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" iface="eth0" netns="/var/run/netns/cni-0c4b5841-6f7f-61de-b592-73524629b765" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.892 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" iface="eth0" netns="/var/run/netns/cni-0c4b5841-6f7f-61de-b592-73524629b765" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.892 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" iface="eth0" netns="/var/run/netns/cni-0c4b5841-6f7f-61de-b592-73524629b765" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.892 [INFO][4356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.892 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.951 [INFO][4383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.952 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.952 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.963 [WARNING][4383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.963 [INFO][4383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.966 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:58.974188 containerd[1691]: 2025-09-13 00:11:58.971 [INFO][4356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:11:58.977448 containerd[1691]: time="2025-09-13T00:11:58.974385541Z" level=info msg="TearDown network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" successfully" Sep 13 00:11:58.977448 containerd[1691]: time="2025-09-13T00:11:58.974435441Z" level=info msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" returns successfully" Sep 13 00:11:58.980800 systemd[1]: run-netns-cni\x2d0c4b5841\x2d6f7f\x2d61de\x2db592\x2d73524629b765.mount: Deactivated successfully. 
Sep 13 00:11:59.122455 kubelet[3144]: I0913 00:11:59.121291 3144 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-ca-bundle\") pod \"8b8b36a7-d494-4570-bf45-9dab19166870\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " Sep 13 00:11:59.122455 kubelet[3144]: I0913 00:11:59.121362 3144 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8qg9\" (UniqueName: \"kubernetes.io/projected/8b8b36a7-d494-4570-bf45-9dab19166870-kube-api-access-p8qg9\") pod \"8b8b36a7-d494-4570-bf45-9dab19166870\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " Sep 13 00:11:59.122455 kubelet[3144]: I0913 00:11:59.121396 3144 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-backend-key-pair\") pod \"8b8b36a7-d494-4570-bf45-9dab19166870\" (UID: \"8b8b36a7-d494-4570-bf45-9dab19166870\") " Sep 13 00:11:59.124415 kubelet[3144]: I0913 00:11:59.124365 3144 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b8b36a7-d494-4570-bf45-9dab19166870" (UID: "8b8b36a7-d494-4570-bf45-9dab19166870"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:11:59.131231 kubelet[3144]: I0913 00:11:59.131174 3144 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b8b36a7-d494-4570-bf45-9dab19166870" (UID: "8b8b36a7-d494-4570-bf45-9dab19166870"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:11:59.131385 kubelet[3144]: I0913 00:11:59.131365 3144 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8b36a7-d494-4570-bf45-9dab19166870-kube-api-access-p8qg9" (OuterVolumeSpecName: "kube-api-access-p8qg9") pod "8b8b36a7-d494-4570-bf45-9dab19166870" (UID: "8b8b36a7-d494-4570-bf45-9dab19166870"). InnerVolumeSpecName "kube-api-access-p8qg9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:11:59.133133 systemd[1]: var-lib-kubelet-pods-8b8b36a7\x2dd494\x2d4570\x2dbf45\x2d9dab19166870-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp8qg9.mount: Deactivated successfully. Sep 13 00:11:59.133350 systemd[1]: var-lib-kubelet-pods-8b8b36a7\x2dd494\x2d4570\x2dbf45\x2d9dab19166870-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
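With the whisker pod torn down, the kubelet reconciler unmounts its three volumes (the CA-bundle configmap, the projected API token, and the backend key-pair secret), and systemd confirms the corresponding mount units deactivated. A quick way to verify nothing was left behind on disk (the path layout follows kubelet's standard /var/lib/kubelet/pods/<uid>/volumes convention; the helper itself is made up):

```python
import os

# Pod UID taken from the log entries above.
POD_UID = "8b8b36a7-d494-4570-bf45-9dab19166870"
VOLUMES_DIR = f"/var/lib/kubelet/pods/{POD_UID}/volumes"

def remaining_volumes():
    """List volume dirs still present; empty after a clean teardown."""
    found = []
    if not os.path.isdir(VOLUMES_DIR):
        return found
    for plugin in os.listdir(VOLUMES_DIR):        # e.g. kubernetes.io~secret
        plugin_dir = os.path.join(VOLUMES_DIR, plugin)
        for vol in os.listdir(plugin_dir):        # e.g. whisker-backend-key-pair
            found.append(f"{plugin}/{vol}")
    return found

if __name__ == "__main__":
    print(remaining_volumes() or "no volumes left")
```

After the three UnmountVolume.TearDown successes logged above, this should print "no volumes left" for the whisker pod, matching the "Volume detached" reconciler entries that follow.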
Sep 13 00:11:59.222481 kubelet[3144]: I0913 00:11:59.222379 3144 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-ca-bundle\") on node \"ci-4081.3.5-n-e49e858a9f\" DevicePath \"\"" Sep 13 00:11:59.222481 kubelet[3144]: I0913 00:11:59.222421 3144 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8qg9\" (UniqueName: \"kubernetes.io/projected/8b8b36a7-d494-4570-bf45-9dab19166870-kube-api-access-p8qg9\") on node \"ci-4081.3.5-n-e49e858a9f\" DevicePath \"\"" Sep 13 00:11:59.222481 kubelet[3144]: I0913 00:11:59.222437 3144 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b8b36a7-d494-4570-bf45-9dab19166870-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-e49e858a9f\" DevicePath \"\"" Sep 13 00:11:59.850127 systemd[1]: Removed slice kubepods-besteffort-pod8b8b36a7_d494_4570_bf45_9dab19166870.slice - libcontainer container kubepods-besteffort-pod8b8b36a7_d494_4570_bf45_9dab19166870.slice. Sep 13 00:11:59.941046 systemd[1]: Created slice kubepods-besteffort-podac93daca_8728_4661_865a_312bbf6c05b9.slice - libcontainer container kubepods-besteffort-podac93daca_8728_4661_865a_312bbf6c05b9.slice. Sep 13 00:12:00.028178 kubelet[3144]: I0913 00:12:00.028116 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blngb\" (UniqueName: \"kubernetes.io/projected/ac93daca-8728-4661-865a-312bbf6c05b9-kube-api-access-blngb\") pod \"whisker-869864888c-zb7bt\" (UID: \"ac93daca-8728-4661-865a-312bbf6c05b9\") " pod="calico-system/whisker-869864888c-zb7bt" Sep 13 00:12:00.028178 kubelet[3144]: I0913 00:12:00.028192 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac93daca-8728-4661-865a-312bbf6c05b9-whisker-backend-key-pair\") pod \"whisker-869864888c-zb7bt\" (UID: \"ac93daca-8728-4661-865a-312bbf6c05b9\") " pod="calico-system/whisker-869864888c-zb7bt" Sep 13 00:12:00.028178 kubelet[3144]: I0913 00:12:00.028241 3144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac93daca-8728-4661-865a-312bbf6c05b9-whisker-ca-bundle\") pod \"whisker-869864888c-zb7bt\" (UID: \"ac93daca-8728-4661-865a-312bbf6c05b9\") " pod="calico-system/whisker-869864888c-zb7bt" Sep 13 00:12:00.246149 containerd[1691]: time="2025-09-13T00:12:00.245801650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869864888c-zb7bt,Uid:ac93daca-8728-4661-865a-312bbf6c05b9,Namespace:calico-system,Attempt:0,}" Sep 13 00:12:00.523677 systemd-networkd[1436]: cali66808e05ddd: Link UP Sep 13 00:12:00.524705 systemd-networkd[1436]: cali66808e05ddd: Gained carrier Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.355 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.373 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0 whisker-869864888c- calico-system ac93daca-8728-4661-865a-312bbf6c05b9 933 0 2025-09-13 00:11:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:869864888c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f whisker-869864888c-zb7bt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali66808e05ddd [] [] }} ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.373 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.425 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" HandleID="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.425 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" HandleID="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"whisker-869864888c-zb7bt", "timestamp":"2025-09-13 00:12:00.425581378 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.425 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.425 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.425 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.437 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.443 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.448 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.451 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.453 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.453 [INFO][4525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.455 [INFO][4525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58 Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.460 [INFO][4525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.471 [INFO][4525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.129/26] block=192.168.46.128/26 handle="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.471 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.129/26] handle="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.471 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:00.558206 containerd[1691]: 2025-09-13 00:12:00.471 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.129/26] IPv6=[] ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" HandleID="k8s-pod-network.ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.475 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0", GenerateName:"whisker-869864888c-", Namespace:"calico-system", SelfLink:"", UID:"ac93daca-8728-4661-865a-312bbf6c05b9", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869864888c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"whisker-869864888c-zb7bt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66808e05ddd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.475 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.129/32] ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.475 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66808e05ddd ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.526 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.527 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" 
Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0", GenerateName:"whisker-869864888c-", Namespace:"calico-system", SelfLink:"", UID:"ac93daca-8728-4661-865a-312bbf6c05b9", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"869864888c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58", Pod:"whisker-869864888c-zb7bt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66808e05ddd", MAC:"22:a5:f5:5d:3e:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:00.560360 containerd[1691]: 2025-09-13 00:12:00.551 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58" Namespace="calico-system" Pod="whisker-869864888c-zb7bt" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--869864888c--zb7bt-eth0" Sep 13 00:12:00.585331 kubelet[3144]: I0913 00:12:00.583981 3144 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8b36a7-d494-4570-bf45-9dab19166870" path="/var/lib/kubelet/pods/8b8b36a7-d494-4570-bf45-9dab19166870/volumes" Sep 13 00:12:00.605678 containerd[1691]: time="2025-09-13T00:12:00.605550810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:00.605678 containerd[1691]: time="2025-09-13T00:12:00.605624311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:00.605678 containerd[1691]: time="2025-09-13T00:12:00.605644011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:00.606611 containerd[1691]: time="2025-09-13T00:12:00.605745413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:00.649256 systemd[1]: Started cri-containerd-ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58.scope - libcontainer container ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58. 
Sep 13 00:12:00.752185 containerd[1691]: time="2025-09-13T00:12:00.752046096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869864888c-zb7bt,Uid:ac93daca-8728-4661-865a-312bbf6c05b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58\"" Sep 13 00:12:00.756561 containerd[1691]: time="2025-09-13T00:12:00.756270665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:12:00.892054 kernel: bpftool[4622]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:12:01.233840 systemd-networkd[1436]: vxlan.calico: Link UP Sep 13 00:12:01.233851 systemd-networkd[1436]: vxlan.calico: Gained carrier Sep 13 00:12:01.695010 systemd-networkd[1436]: cali66808e05ddd: Gained IPv6LL Sep 13 00:12:02.009363 containerd[1691]: time="2025-09-13T00:12:02.008814166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:02.012039 containerd[1691]: time="2025-09-13T00:12:02.011972917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:12:02.016207 containerd[1691]: time="2025-09-13T00:12:02.016129485Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:02.021823 containerd[1691]: time="2025-09-13T00:12:02.021747076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:02.023694 containerd[1691]: time="2025-09-13T00:12:02.023095798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.266767732s" Sep 13 00:12:02.023694 containerd[1691]: time="2025-09-13T00:12:02.023145299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:12:02.026766 containerd[1691]: time="2025-09-13T00:12:02.026679757Z" level=info msg="CreateContainer within sandbox \"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:12:02.064308 containerd[1691]: time="2025-09-13T00:12:02.064222168Z" level=info msg="CreateContainer within sandbox \"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9678a64917af2ed08a3392d5657c2e1ecfa0a8c8245815ac24f244397102fa55\"" Sep 13 00:12:02.066155 containerd[1691]: time="2025-09-13T00:12:02.065608190Z" level=info msg="StartContainer for \"9678a64917af2ed08a3392d5657c2e1ecfa0a8c8245815ac24f244397102fa55\"" Sep 13 00:12:02.110223 systemd[1]: Started cri-containerd-9678a64917af2ed08a3392d5657c2e1ecfa0a8c8245815ac24f244397102fa55.scope - libcontainer container 9678a64917af2ed08a3392d5657c2e1ecfa0a8c8245815ac24f244397102fa55. 
Sep 13 00:12:02.160244 containerd[1691]: time="2025-09-13T00:12:02.159341117Z" level=info msg="StartContainer for \"9678a64917af2ed08a3392d5657c2e1ecfa0a8c8245815ac24f244397102fa55\" returns successfully" Sep 13 00:12:02.161464 containerd[1691]: time="2025-09-13T00:12:02.161431151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:12:02.334163 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Sep 13 00:12:02.581239 containerd[1691]: time="2025-09-13T00:12:02.581170486Z" level=info msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.632 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.632 [INFO][4750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" iface="eth0" netns="/var/run/netns/cni-cc4c55c4-7c75-c22a-2674-47c8134eaacd" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.633 [INFO][4750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" iface="eth0" netns="/var/run/netns/cni-cc4c55c4-7c75-c22a-2674-47c8134eaacd" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.633 [INFO][4750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" iface="eth0" netns="/var/run/netns/cni-cc4c55c4-7c75-c22a-2674-47c8134eaacd" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.633 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.633 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.660 [INFO][4757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.660 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.661 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.666 [WARNING][4757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.666 [INFO][4757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.668 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:02.670663 containerd[1691]: 2025-09-13 00:12:02.669 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:02.674076 containerd[1691]: time="2025-09-13T00:12:02.673106683Z" level=info msg="TearDown network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" successfully" Sep 13 00:12:02.674076 containerd[1691]: time="2025-09-13T00:12:02.673171284Z" level=info msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" returns successfully" Sep 13 00:12:02.676884 containerd[1691]: time="2025-09-13T00:12:02.676466837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-nwqsn,Uid:1a04c4d5-b360-4717-b663-a87e97f493a4,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:12:02.676760 systemd[1]: run-netns-cni\x2dcc4c55c4\x2d7c75\x2dc22a\x2d2674\x2d47c8134eaacd.mount: Deactivated successfully. 
Sep 13 00:12:02.825994 systemd-networkd[1436]: calied0c0cef1fd: Link UP Sep 13 00:12:02.826930 systemd-networkd[1436]: calied0c0cef1fd: Gained carrier Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.747 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0 calico-apiserver-6cc8c97746- calico-apiserver 1a04c4d5-b360-4717-b663-a87e97f493a4 950 0 2025-09-13 00:11:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8c97746 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f calico-apiserver-6cc8c97746-nwqsn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied0c0cef1fd [] [] }} ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.747 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.774 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" HandleID="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.774 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" HandleID="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"calico-apiserver-6cc8c97746-nwqsn", "timestamp":"2025-09-13 00:12:02.774688037 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.774 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.774 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.775 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.781 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.791 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.798 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.800 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.802 [INFO][4776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.803 [INFO][4776] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.804 [INFO][4776] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61 Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.813 [INFO][4776] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.819 [INFO][4776] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.130/26] block=192.168.46.128/26 handle="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.819 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.130/26] handle="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.819 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:02.853195 containerd[1691]: 2025-09-13 00:12:02.819 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.130/26] IPv6=[] ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" HandleID="k8s-pod-network.3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.822 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a04c4d5-b360-4717-b663-a87e97f493a4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"calico-apiserver-6cc8c97746-nwqsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0c0cef1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.822 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.130/32] ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.822 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied0c0cef1fd ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.827 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.827 
[INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a04c4d5-b360-4717-b663-a87e97f493a4", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61", Pod:"calico-apiserver-6cc8c97746-nwqsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0c0cef1fd", MAC:"56:1d:bb:3f:ff:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:02.854180 containerd[1691]: 2025-09-13 00:12:02.851 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-nwqsn" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:02.884495 containerd[1691]: time="2025-09-13T00:12:02.884002017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:02.884495 containerd[1691]: time="2025-09-13T00:12:02.884120619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:02.884495 containerd[1691]: time="2025-09-13T00:12:02.884151719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:02.885440 containerd[1691]: time="2025-09-13T00:12:02.885210337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:02.915202 systemd[1]: Started cri-containerd-3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61.scope - libcontainer container 3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61. 
Sep 13 00:12:02.958741 containerd[1691]: time="2025-09-13T00:12:02.958699433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-nwqsn,Uid:1a04c4d5-b360-4717-b663-a87e97f493a4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61\"" Sep 13 00:12:03.576382 containerd[1691]: time="2025-09-13T00:12:03.576155088Z" level=info msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.626 [INFO][4839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.626 [INFO][4839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" iface="eth0" netns="/var/run/netns/cni-7442e406-e40a-5e5b-ff3c-316bc90eee16" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.626 [INFO][4839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" iface="eth0" netns="/var/run/netns/cni-7442e406-e40a-5e5b-ff3c-316bc90eee16" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.631 [INFO][4839] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" iface="eth0" netns="/var/run/netns/cni-7442e406-e40a-5e5b-ff3c-316bc90eee16" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.631 [INFO][4839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.631 [INFO][4839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.658 [INFO][4846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.658 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.658 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.669 [WARNING][4846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.669 [INFO][4846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.670 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:03.677771 containerd[1691]: 2025-09-13 00:12:03.676 [INFO][4839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:03.680185 containerd[1691]: time="2025-09-13T00:12:03.680131581Z" level=info msg="TearDown network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" successfully" Sep 13 00:12:03.680185 containerd[1691]: time="2025-09-13T00:12:03.680167781Z" level=info msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" returns successfully" Sep 13 00:12:03.680966 containerd[1691]: time="2025-09-13T00:12:03.680919994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-j89mh,Uid:52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:12:03.682607 systemd[1]: run-netns-cni\x2d7442e406\x2de40a\x2d5e5b\x2dff3c\x2d316bc90eee16.mount: Deactivated successfully. 
Sep 13 00:12:03.980675 systemd-networkd[1436]: calic601c119e29: Link UP Sep 13 00:12:03.983941 systemd-networkd[1436]: calic601c119e29: Gained carrier Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.835 [INFO][4856] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0 calico-apiserver-6cc8c97746- calico-apiserver 52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3 958 0 2025-09-13 00:11:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cc8c97746 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f calico-apiserver-6cc8c97746-j89mh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic601c119e29 [] [] }} ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.835 [INFO][4856] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.890 [INFO][4868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" HandleID="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.891 [INFO][4868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" HandleID="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"calico-apiserver-6cc8c97746-j89mh", "timestamp":"2025-09-13 00:12:03.887577359 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.891 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.893 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.893 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.905 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.915 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.924 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.927 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.933 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.933 [INFO][4868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.935 [INFO][4868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9 Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.949 [INFO][4868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.963 [INFO][4868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.131/26] block=192.168.46.128/26 handle="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.963 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.131/26] handle="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.963 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:04.021012 containerd[1691]: 2025-09-13 00:12:03.963 [INFO][4868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.131/26] IPv6=[] ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" HandleID="k8s-pod-network.497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:03.971 [INFO][4856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"calico-apiserver-6cc8c97746-j89mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic601c119e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:03.971 [INFO][4856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.131/32] ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:03.971 [INFO][4856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic601c119e29 ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:03.986 [INFO][4856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:03.988 
[INFO][4856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9", Pod:"calico-apiserver-6cc8c97746-j89mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic601c119e29", MAC:"3e:95:d3:c1:11:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:04.022065 containerd[1691]: 2025-09-13 00:12:04.012 [INFO][4856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9" Namespace="calico-apiserver" Pod="calico-apiserver-6cc8c97746-j89mh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:04.085840 containerd[1691]: time="2025-09-13T00:12:04.085516782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:04.085840 containerd[1691]: time="2025-09-13T00:12:04.085582183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:04.085840 containerd[1691]: time="2025-09-13T00:12:04.085602883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:04.085840 containerd[1691]: time="2025-09-13T00:12:04.085713085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:04.144216 systemd[1]: Started cri-containerd-497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9.scope - libcontainer container 497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9. 
Sep 13 00:12:04.220001 containerd[1691]: time="2025-09-13T00:12:04.219697267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cc8c97746-j89mh,Uid:52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9\"" Sep 13 00:12:04.732267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774307482.mount: Deactivated successfully. Sep 13 00:12:04.781295 containerd[1691]: time="2025-09-13T00:12:04.781239911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:04.783663 containerd[1691]: time="2025-09-13T00:12:04.783495347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:12:04.787054 containerd[1691]: time="2025-09-13T00:12:04.786247692Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:04.790697 containerd[1691]: time="2025-09-13T00:12:04.790637464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:04.791968 containerd[1691]: time="2025-09-13T00:12:04.791301975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.62968912s" Sep 13 00:12:04.791968 containerd[1691]: time="2025-09-13T00:12:04.791369076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:12:04.794478 containerd[1691]: time="2025-09-13T00:12:04.793339808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:12:04.794837 containerd[1691]: time="2025-09-13T00:12:04.794802932Z" level=info msg="CreateContainer within sandbox \"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:12:04.830977 systemd-networkd[1436]: calied0c0cef1fd: Gained IPv6LL Sep 13 00:12:04.839481 containerd[1691]: time="2025-09-13T00:12:04.839430558Z" level=info msg="CreateContainer within sandbox \"ee16483307b715342d8359d58b9e90ea2d88c9f454f0f9eb518cc56803b65a58\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24b8b85c7ed654974f91c5e947d3193c812a32f2fe826cc5ba37e14def2f7cd8\"" Sep 13 00:12:04.841074 containerd[1691]: time="2025-09-13T00:12:04.840137170Z" level=info msg="StartContainer for \"24b8b85c7ed654974f91c5e947d3193c812a32f2fe826cc5ba37e14def2f7cd8\"" Sep 13 00:12:04.885206 systemd[1]: Started cri-containerd-24b8b85c7ed654974f91c5e947d3193c812a32f2fe826cc5ba37e14def2f7cd8.scope - libcontainer container 24b8b85c7ed654974f91c5e947d3193c812a32f2fe826cc5ba37e14def2f7cd8. 
Sep 13 00:12:04.935393 containerd[1691]: time="2025-09-13T00:12:04.935337020Z" level=info msg="StartContainer for \"24b8b85c7ed654974f91c5e947d3193c812a32f2fe826cc5ba37e14def2f7cd8\" returns successfully" Sep 13 00:12:05.577188 containerd[1691]: time="2025-09-13T00:12:05.575974952Z" level=info msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" Sep 13 00:12:05.662461 systemd-networkd[1436]: calic601c119e29: Gained IPv6LL Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.656 [INFO][4978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.658 [INFO][4978] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" iface="eth0" netns="/var/run/netns/cni-5ee4f1fd-47d1-3546-219e-60a57cec92aa" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.659 [INFO][4978] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" iface="eth0" netns="/var/run/netns/cni-5ee4f1fd-47d1-3546-219e-60a57cec92aa" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.661 [INFO][4978] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" iface="eth0" netns="/var/run/netns/cni-5ee4f1fd-47d1-3546-219e-60a57cec92aa" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.661 [INFO][4978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.661 [INFO][4978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.689 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.689 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.689 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.695 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.695 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.696 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:05.699970 containerd[1691]: 2025-09-13 00:12:05.698 [INFO][4978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:05.701836 containerd[1691]: time="2025-09-13T00:12:05.701132790Z" level=info msg="TearDown network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" successfully" Sep 13 00:12:05.701836 containerd[1691]: time="2025-09-13T00:12:05.701191291Z" level=info msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" returns successfully" Sep 13 00:12:05.704066 containerd[1691]: time="2025-09-13T00:12:05.703351726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z8qzz,Uid:4d37b03d-0dc2-4253-bf4e-944367e78e4b,Namespace:kube-system,Attempt:1,}" Sep 13 00:12:05.705349 systemd[1]: run-netns-cni\x2d5ee4f1fd\x2d47d1\x2d3546\x2d219e\x2d60a57cec92aa.mount: Deactivated successfully. Sep 13 00:12:05.884804 systemd-networkd[1436]: cali29989d2117b: Link UP Sep 13 00:12:05.888318 systemd-networkd[1436]: cali29989d2117b: Gained carrier Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.806 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0 coredns-668d6bf9bc- kube-system 4d37b03d-0dc2-4253-bf4e-944367e78e4b 970 0 2025-09-13 00:11:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f coredns-668d6bf9bc-z8qzz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29989d2117b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.806 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.830 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" HandleID="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" 
Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.830 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" HandleID="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"coredns-668d6bf9bc-z8qzz", "timestamp":"2025-09-13 00:12:05.830252692 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.830 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.830 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.830 [INFO][5004] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.837 [INFO][5004] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.843 [INFO][5004] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.847 [INFO][5004] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.849 [INFO][5004] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.852 [INFO][5004] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.852 [INFO][5004] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.853 [INFO][5004] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.858 [INFO][5004] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.870 [INFO][5004] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.132/26] block=192.168.46.128/26 handle="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.870 [INFO][5004] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.132/26] 
handle="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.870 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:05.914434 containerd[1691]: 2025-09-13 00:12:05.870 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.132/26] IPv6=[] ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" HandleID="k8s-pod-network.6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.873 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4d37b03d-0dc2-4253-bf4e-944367e78e4b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"coredns-668d6bf9bc-z8qzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29989d2117b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.873 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.132/32] ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.873 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29989d2117b ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 
00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.890 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.890 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4d37b03d-0dc2-4253-bf4e-944367e78e4b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b", Pod:"coredns-668d6bf9bc-z8qzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29989d2117b", MAC:"1e:5b:9c:87:32:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:05.916649 containerd[1691]: 2025-09-13 00:12:05.911 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b" Namespace="kube-system" Pod="coredns-668d6bf9bc-z8qzz" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:05.921465 kubelet[3144]: I0913 00:12:05.921399 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-869864888c-zb7bt" podStartSLOduration=2.884702139 podStartE2EDuration="6.921377976s" podCreationTimestamp="2025-09-13 00:11:59 +0000 UTC" firstStartedPulling="2025-09-13 00:12:00.755569053 +0000 UTC m=+50.314559310" lastFinishedPulling="2025-09-13 00:12:04.79224479 +0000 UTC m=+54.351235147" observedRunningTime="2025-09-13 00:12:05.918896136 +0000 UTC m=+55.477886393" watchObservedRunningTime="2025-09-13 00:12:05.921377976 
+0000 UTC m=+55.480368233" Sep 13 00:12:05.985619 containerd[1691]: time="2025-09-13T00:12:05.985522621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:05.985772 containerd[1691]: time="2025-09-13T00:12:05.985646723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:05.988862 containerd[1691]: time="2025-09-13T00:12:05.985678423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:05.988862 containerd[1691]: time="2025-09-13T00:12:05.987474053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:06.026306 systemd[1]: Started cri-containerd-6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b.scope - libcontainer container 6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b. Sep 13 00:12:06.095102 containerd[1691]: time="2025-09-13T00:12:06.095055804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z8qzz,Uid:4d37b03d-0dc2-4253-bf4e-944367e78e4b,Namespace:kube-system,Attempt:1,} returns sandbox id \"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b\"" Sep 13 00:12:06.103149 containerd[1691]: time="2025-09-13T00:12:06.103104135Z" level=info msg="CreateContainer within sandbox \"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:12:06.152065 containerd[1691]: time="2025-09-13T00:12:06.151293420Z" level=info msg="CreateContainer within sandbox \"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adfafad414cacb3d2ca4adf41d0e56bf29aa7aac0b455b949cec0fff32775216\"" Sep 13 00:12:06.155844 containerd[1691]: time="2025-09-13T00:12:06.154902579Z" level=info msg="StartContainer for \"adfafad414cacb3d2ca4adf41d0e56bf29aa7aac0b455b949cec0fff32775216\"" Sep 13 00:12:06.204263 systemd[1]: Started cri-containerd-adfafad414cacb3d2ca4adf41d0e56bf29aa7aac0b455b949cec0fff32775216.scope - libcontainer container adfafad414cacb3d2ca4adf41d0e56bf29aa7aac0b455b949cec0fff32775216. Sep 13 00:12:06.283866 containerd[1691]: time="2025-09-13T00:12:06.283711676Z" level=info msg="StartContainer for \"adfafad414cacb3d2ca4adf41d0e56bf29aa7aac0b455b949cec0fff32775216\" returns successfully" Sep 13 00:12:06.579370 containerd[1691]: time="2025-09-13T00:12:06.577554461Z" level=info msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" Sep 13 00:12:06.579370 containerd[1691]: time="2025-09-13T00:12:06.578488076Z" level=info msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" Sep 13 00:12:06.708965 systemd[1]: run-containerd-runc-k8s.io-6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b-runc.0DfqTs.mount: Deactivated successfully. Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.701 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.701 [INFO][5119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" iface="eth0" netns="/var/run/netns/cni-58f27927-ecd2-622e-3022-daf4c71c603d" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.702 [INFO][5119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" iface="eth0" netns="/var/run/netns/cni-58f27927-ecd2-622e-3022-daf4c71c603d" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.702 [INFO][5119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" iface="eth0" netns="/var/run/netns/cni-58f27927-ecd2-622e-3022-daf4c71c603d" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.702 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.702 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.763 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.764 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.764 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.772 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.772 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.774 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:06.778583 containerd[1691]: 2025-09-13 00:12:06.776 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:06.782223 containerd[1691]: time="2025-09-13T00:12:06.781622684Z" level=info msg="TearDown network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" successfully" Sep 13 00:12:06.782223 containerd[1691]: time="2025-09-13T00:12:06.781663685Z" level=info msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" returns successfully" Sep 13 00:12:06.785601 systemd[1]: run-netns-cni\x2d58f27927\x2decd2\x2d622e\x2d3022\x2ddaf4c71c603d.mount: Deactivated successfully. 
Sep 13 00:12:06.786716 containerd[1691]: time="2025-09-13T00:12:06.786678567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62xh,Uid:cafccf29-04c8-4022-9a33-4b449e2cbfbb,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.683 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.686 [INFO][5120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" iface="eth0" netns="/var/run/netns/cni-f050ce4b-83e7-7786-0984-878e193bc20e" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.688 [INFO][5120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" iface="eth0" netns="/var/run/netns/cni-f050ce4b-83e7-7786-0984-878e193bc20e" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.688 [INFO][5120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" iface="eth0" netns="/var/run/netns/cni-f050ce4b-83e7-7786-0984-878e193bc20e" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.688 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.688 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.772 [INFO][5133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.773 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.774 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.791 [WARNING][5133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.792 [INFO][5133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.793 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:06.803634 containerd[1691]: 2025-09-13 00:12:06.797 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:06.807238 containerd[1691]: time="2025-09-13T00:12:06.805261069Z" level=info msg="TearDown network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" successfully" Sep 13 00:12:06.807785 containerd[1691]: time="2025-09-13T00:12:06.807539206Z" level=info msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" returns successfully" Sep 13 00:12:06.811349 containerd[1691]: time="2025-09-13T00:12:06.809379536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4fdd754-sspxr,Uid:fe335ca4-a24a-461e-842a-3dde2493b4a1,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:06.814386 systemd[1]: run-netns-cni\x2df050ce4b\x2d83e7\x2d7786\x2d0984\x2d878e193bc20e.mount: Deactivated successfully. Sep 13 00:12:06.937831 kubelet[3144]: I0913 00:12:06.936883 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z8qzz" podStartSLOduration=50.936861812 podStartE2EDuration="50.936861812s" podCreationTimestamp="2025-09-13 00:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:06.936242702 +0000 UTC m=+56.495232959" watchObservedRunningTime="2025-09-13 00:12:06.936861812 +0000 UTC m=+56.495852169" Sep 13 00:12:07.159226 systemd-networkd[1436]: cali13f88cfccee: Link UP Sep 13 00:12:07.164350 systemd-networkd[1436]: cali13f88cfccee: Gained carrier Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:06.922 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0 csi-node-driver- calico-system cafccf29-04c8-4022-9a33-4b449e2cbfbb 986 0 2025-09-13 00:11:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f csi-node-driver-x62xh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali13f88cfccee [] [] }} ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:06.927 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.056 [INFO][5183] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" HandleID="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.057 [INFO][5183] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" 
HandleID="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"csi-node-driver-x62xh", "timestamp":"2025-09-13 00:12:07.056630862 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.057 [INFO][5183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.057 [INFO][5183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.057 [INFO][5183] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.083 [INFO][5183] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.092 [INFO][5183] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.099 [INFO][5183] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.102 [INFO][5183] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.105 [INFO][5183] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.105 [INFO][5183] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.107 [INFO][5183] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357 Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.121 [INFO][5183] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.138 [INFO][5183] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.133/26] block=192.168.46.128/26 handle="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.138 [INFO][5183] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.133/26] handle="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.138 [INFO][5183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:07.201400 containerd[1691]: 2025-09-13 00:12:07.138 [INFO][5183] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.133/26] IPv6=[] ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" HandleID="k8s-pod-network.a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.146 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cafccf29-04c8-4022-9a33-4b449e2cbfbb", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"csi-node-driver-x62xh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13f88cfccee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.146 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.133/32] ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.147 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13f88cfccee ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.170 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.170 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cafccf29-04c8-4022-9a33-4b449e2cbfbb", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357", Pod:"csi-node-driver-x62xh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13f88cfccee", MAC:"5e:27:a2:4c:e7:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.202562 containerd[1691]: 2025-09-13 00:12:07.194 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357" Namespace="calico-system" Pod="csi-node-driver-x62xh" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:07.277721 systemd-networkd[1436]: cali89c820a1c6b: Link UP Sep 13 00:12:07.283005 systemd-networkd[1436]: cali89c820a1c6b: Gained carrier Sep 13 00:12:07.303121 containerd[1691]: time="2025-09-13T00:12:07.302365964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:07.303121 containerd[1691]: time="2025-09-13T00:12:07.302449265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:07.303121 containerd[1691]: time="2025-09-13T00:12:07.302471465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:07.303121 containerd[1691]: time="2025-09-13T00:12:07.302611368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.003 [INFO][5170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0 calico-kube-controllers-6dc4fdd754- calico-system fe335ca4-a24a-461e-842a-3dde2493b4a1 985 0 2025-09-13 00:11:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc4fdd754 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f calico-kube-controllers-6dc4fdd754-sspxr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali89c820a1c6b [] [] }} ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.003 [INFO][5170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.116 [INFO][5191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" HandleID="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.117 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" HandleID="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"calico-kube-controllers-6dc4fdd754-sspxr", "timestamp":"2025-09-13 00:12:07.116181732 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.117 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.138 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
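
Throughout these records, WorkloadEndpoint names such as ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0 look odd but follow a rule that can be inferred directly from the log: every literal '-' in the node and pod names is doubled, so that single dashes can separate the node, orchestrator ("k8s"), pod, and interface fields. A quick check (the helper is illustrative, not Calico's exact function):

```go
// Reproduce the escaped endpoint name seen in the log from its components.
package main

import (
	"fmt"
	"strings"
)

func escape(s string) string { return strings.ReplaceAll(s, "-", "--") }

func main() {
	node := "ci-4081.3.5-n-e49e858a9f"
	pod := "calico-kube-controllers-6dc4fdd754-sspxr"
	name := escape(node) + "-k8s-" + escape(pod) + "-eth0"
	fmt.Println(name)
	// ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0
}
```
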
Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.139 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.175 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.193 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.209 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.214 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.220 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.220 [INFO][5191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.222 [INFO][5191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.232 [INFO][5191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.248 [INFO][5191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.134/26] block=192.168.46.128/26 handle="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.248 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.134/26] handle="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.248 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:07.320363 containerd[1691]: 2025-09-13 00:12:07.248 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.134/26] IPv6=[] ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" HandleID="k8s-pod-network.e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.262 [INFO][5170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0", GenerateName:"calico-kube-controllers-6dc4fdd754-", Namespace:"calico-system", SelfLink:"", UID:"fe335ca4-a24a-461e-842a-3dde2493b4a1", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4fdd754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"calico-kube-controllers-6dc4fdd754-sspxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89c820a1c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.263 [INFO][5170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.134/32] ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.263 [INFO][5170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89c820a1c6b ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.286 [INFO][5170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" 
WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.288 [INFO][5170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0", GenerateName:"calico-kube-controllers-6dc4fdd754-", Namespace:"calico-system", SelfLink:"", UID:"fe335ca4-a24a-461e-842a-3dde2493b4a1", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4fdd754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e", Pod:"calico-kube-controllers-6dc4fdd754-sspxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89c820a1c6b", MAC:"1a:1d:63:fb:51:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:07.321889 containerd[1691]: 2025-09-13 00:12:07.316 [INFO][5170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e" Namespace="calico-system" Pod="calico-kube-controllers-6dc4fdd754-sspxr" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:07.363887 systemd[1]: Started cri-containerd-a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357.scope - libcontainer container a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357. Sep 13 00:12:07.412578 containerd[1691]: time="2025-09-13T00:12:07.411936148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:07.413415 containerd[1691]: time="2025-09-13T00:12:07.413012165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:07.413415 containerd[1691]: time="2025-09-13T00:12:07.413077167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:07.415411 containerd[1691]: time="2025-09-13T00:12:07.414807595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:07.471449 systemd[1]: Started cri-containerd-e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e.scope - libcontainer container e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e. Sep 13 00:12:07.492844 containerd[1691]: time="2025-09-13T00:12:07.492104953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x62xh,Uid:cafccf29-04c8-4022-9a33-4b449e2cbfbb,Namespace:calico-system,Attempt:1,} returns sandbox id \"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357\"" Sep 13 00:12:07.578398 containerd[1691]: time="2025-09-13T00:12:07.577703347Z" level=info msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" Sep 13 00:12:07.578398 containerd[1691]: time="2025-09-13T00:12:07.578337558Z" level=info msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" Sep 13 00:12:07.601729 containerd[1691]: time="2025-09-13T00:12:07.601681838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4fdd754-sspxr,Uid:fe335ca4-a24a-461e-842a-3dde2493b4a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e\"" Sep 13 00:12:07.774441 systemd-networkd[1436]: cali29989d2117b: Gained IPv6LL Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.676 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.676 [INFO][5314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" iface="eth0" netns="/var/run/netns/cni-f94aa388-9e2b-a880-07b2-559cc784e414" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.677 [INFO][5314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" iface="eth0" netns="/var/run/netns/cni-f94aa388-9e2b-a880-07b2-559cc784e414" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.678 [INFO][5314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" iface="eth0" netns="/var/run/netns/cni-f94aa388-9e2b-a880-07b2-559cc784e414" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.678 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.678 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.771 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.771 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
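
The kubelet pod_startup_latency_tracker records earlier in this section report podStartSLOduration as the gap between podCreationTimestamp and the observed running time; for coredns-668d6bf9bc-z8qzz (which pulled no image, so SLO and E2E durations coincide) that is 00:11:16 to 00:12:06.936861812, i.e. the 50.936861812s the log prints. The arithmetic, checked in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet record for coredns-668d6bf9bc-z8qzz
	// (creation time and watchObservedRunningTime).
	created, _ := time.Parse(time.RFC3339, "2025-09-13T00:11:16Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:12:06.936861812Z")
	fmt.Println(running.Sub(created)) // 50.936861812s
}
```
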
Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.773 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.787 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.787 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.790 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.794565 containerd[1691]: 2025-09-13 00:12:07.792 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:07.795652 containerd[1691]: time="2025-09-13T00:12:07.795614996Z" level=info msg="TearDown network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" successfully" Sep 13 00:12:07.795774 containerd[1691]: time="2025-09-13T00:12:07.795756698Z" level=info msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" returns successfully" Sep 13 00:12:07.797366 containerd[1691]: time="2025-09-13T00:12:07.797334524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhnxs,Uid:7857ee55-1894-4eba-8cff-954726351357,Namespace:kube-system,Attempt:1,}" Sep 13 00:12:07.802911 systemd[1]: run-netns-cni\x2df94aa388\x2d9e2b\x2da880\x2d07b2\x2d559cc784e414.mount: Deactivated successfully. Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.748 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.751 [INFO][5318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" iface="eth0" netns="/var/run/netns/cni-3e05394c-8ab6-df87-43e1-0997ea7ca9d1" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.753 [INFO][5318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" iface="eth0" netns="/var/run/netns/cni-3e05394c-8ab6-df87-43e1-0997ea7ca9d1" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.753 [INFO][5318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" iface="eth0" netns="/var/run/netns/cni-3e05394c-8ab6-df87-43e1-0997ea7ca9d1" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.753 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.753 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.812 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.812 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.813 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.831 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.831 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.837 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:07.853278 containerd[1691]: 2025-09-13 00:12:07.848 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:07.856176 containerd[1691]: time="2025-09-13T00:12:07.854108948Z" level=info msg="TearDown network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" successfully" Sep 13 00:12:07.856176 containerd[1691]: time="2025-09-13T00:12:07.854143849Z" level=info msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" returns successfully" Sep 13 00:12:07.857258 systemd[1]: run-netns-cni\x2d3e05394c\x2d8ab6\x2ddf87\x2d43e1\x2d0997ea7ca9d1.mount: Deactivated successfully. 
Sep 13 00:12:07.859752 containerd[1691]: time="2025-09-13T00:12:07.859596037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dsfk4,Uid:ba7fa9d7-76c3-4c04-804c-9f129daebad5,Namespace:calico-system,Attempt:1,}" Sep 13 00:12:08.059829 systemd-networkd[1436]: calie23d15e2843: Link UP Sep 13 00:12:08.061590 systemd-networkd[1436]: calie23d15e2843: Gained carrier Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.920 [INFO][5346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0 coredns-668d6bf9bc- kube-system 7857ee55-1894-4eba-8cff-954726351357 1007 0 2025-09-13 00:11:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f coredns-668d6bf9bc-dhnxs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie23d15e2843 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.920 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.974 [INFO][5369] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" HandleID="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.974 [INFO][5369] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" HandleID="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"coredns-668d6bf9bc-dhnxs", "timestamp":"2025-09-13 00:12:07.974118402 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.974 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.974 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.974 [INFO][5369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.986 [INFO][5369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:07.993 [INFO][5369] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.001 [INFO][5369] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.003 [INFO][5369] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.007 [INFO][5369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.007 [INFO][5369] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.010 [INFO][5369] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.019 [INFO][5369] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.038 [INFO][5369] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.135/26] block=192.168.46.128/26 handle="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.038 [INFO][5369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.135/26] handle="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.038 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:08.103050 containerd[1691]: 2025-09-13 00:12:08.038 [INFO][5369] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.135/26] IPv6=[] ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" HandleID="k8s-pod-network.48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.045 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7857ee55-1894-4eba-8cff-954726351357", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"coredns-668d6bf9bc-dhnxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie23d15e2843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.045 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.135/32] ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.045 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie23d15e2843 ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.065 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.066 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7857ee55-1894-4eba-8cff-954726351357", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f", Pod:"coredns-668d6bf9bc-dhnxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie23d15e2843", MAC:"66:48:a4:f5:2c:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:08.104068 containerd[1691]: 2025-09-13 00:12:08.091 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f" Namespace="kube-system" Pod="coredns-668d6bf9bc-dhnxs" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:08.172352 containerd[1691]: time="2025-09-13T00:12:08.172248029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:08.173700 containerd[1691]: time="2025-09-13T00:12:08.172331830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:08.173700 containerd[1691]: time="2025-09-13T00:12:08.173201844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:08.174962 containerd[1691]: time="2025-09-13T00:12:08.174564966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:08.205540 systemd[1]: Started cri-containerd-48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f.scope - libcontainer container 48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f. Sep 13 00:12:08.223078 systemd-networkd[1436]: cali8ab473c141d: Link UP Sep 13 00:12:08.224893 systemd-networkd[1436]: cali13f88cfccee: Gained IPv6LL Sep 13 00:12:08.227140 systemd-networkd[1436]: cali8ab473c141d: Gained carrier Sep 13 00:12:08.317239 containerd[1691]: time="2025-09-13T00:12:08.316737881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dhnxs,Uid:7857ee55-1894-4eba-8cff-954726351357,Namespace:kube-system,Attempt:1,} returns sandbox id \"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f\"" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.023 [INFO][5355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0 goldmane-54d579b49d- calico-system ba7fa9d7-76c3-4c04-804c-9f129daebad5 1008 0 2025-09-13 00:11:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-e49e858a9f goldmane-54d579b49d-dsfk4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8ab473c141d [] [] }} ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.023 [INFO][5355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.114 [INFO][5379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" HandleID="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.114 [INFO][5379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" HandleID="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-e49e858a9f", "pod":"goldmane-54d579b49d-dsfk4", "timestamp":"2025-09-13 00:12:08.114269884 +0000 UTC"}, Hostname:"ci-4081.3.5-n-e49e858a9f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.115 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.115 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.115 [INFO][5379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-e49e858a9f' Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.131 [INFO][5379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.146 [INFO][5379] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.157 [INFO][5379] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.162 [INFO][5379] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.168 [INFO][5379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.168 [INFO][5379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.173 [INFO][5379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2 Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.183 [INFO][5379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.205 [INFO][5379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.136/26] block=192.168.46.128/26 handle="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.207 [INFO][5379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.136/26] handle="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" host="ci-4081.3.5-n-e49e858a9f" Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.207 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:12:08.329649 containerd[1691]: 2025-09-13 00:12:08.208 [INFO][5379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.136/26] IPv6=[] ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" HandleID="k8s-pod-network.9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.211 [INFO][5355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ba7fa9d7-76c3-4c04-804c-9f129daebad5", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"", Pod:"goldmane-54d579b49d-dsfk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ab473c141d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.212 [INFO][5355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.136/32] ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.212 [INFO][5355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ab473c141d ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.227 [INFO][5355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.230 [INFO][5355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" 
Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ba7fa9d7-76c3-4c04-804c-9f129daebad5", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2", Pod:"goldmane-54d579b49d-dsfk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ab473c141d", MAC:"f2:ba:95:68:e0:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:08.331861 containerd[1691]: 2025-09-13 00:12:08.309 [INFO][5355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2" Namespace="calico-system" Pod="goldmane-54d579b49d-dsfk4" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:08.347974 containerd[1691]: time="2025-09-13T00:12:08.347833188Z" level=info msg="CreateContainer within sandbox \"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:12:08.420619 containerd[1691]: time="2025-09-13T00:12:08.419267451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:12:08.420619 containerd[1691]: time="2025-09-13T00:12:08.419343952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:12:08.420619 containerd[1691]: time="2025-09-13T00:12:08.419364653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:08.420619 containerd[1691]: time="2025-09-13T00:12:08.419462054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:12:08.431300 containerd[1691]: time="2025-09-13T00:12:08.431243246Z" level=info msg="CreateContainer within sandbox \"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"702413afca5d6c57f77563f230caa359f5de959c511f9ad318d89e62ac68b952\"" Sep 13 00:12:08.432866 containerd[1691]: time="2025-09-13T00:12:08.432818472Z" level=info msg="StartContainer for \"702413afca5d6c57f77563f230caa359f5de959c511f9ad318d89e62ac68b952\"" Sep 13 00:12:08.457558 systemd[1]: Started cri-containerd-9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2.scope - libcontainer container 9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2. Sep 13 00:12:08.524424 systemd[1]: Started cri-containerd-702413afca5d6c57f77563f230caa359f5de959c511f9ad318d89e62ac68b952.scope - libcontainer container 702413afca5d6c57f77563f230caa359f5de959c511f9ad318d89e62ac68b952. Sep 13 00:12:08.650375 containerd[1691]: time="2025-09-13T00:12:08.648600785Z" level=info msg="StartContainer for \"702413afca5d6c57f77563f230caa359f5de959c511f9ad318d89e62ac68b952\" returns successfully" Sep 13 00:12:08.671056 systemd-networkd[1436]: cali89c820a1c6b: Gained IPv6LL Sep 13 00:12:08.883407 containerd[1691]: time="2025-09-13T00:12:08.883353908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dsfk4,Uid:ba7fa9d7-76c3-4c04-804c-9f129daebad5,Namespace:calico-system,Attempt:1,} returns sandbox id \"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2\"" Sep 13 00:12:08.956741 kubelet[3144]: I0913 00:12:08.956552 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dhnxs" podStartSLOduration=52.956512799 podStartE2EDuration="52.956512799s" podCreationTimestamp="2025-09-13 00:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:12:08.953872956 +0000 UTC m=+58.512863313" watchObservedRunningTime="2025-09-13 00:12:08.956512799 +0000 UTC m=+58.515503056" Sep 13 00:12:09.213464 containerd[1691]: time="2025-09-13T00:12:09.213138378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:09.215932 containerd[1691]: time="2025-09-13T00:12:09.215874423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:12:09.221675 containerd[1691]: time="2025-09-13T00:12:09.221629316Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:09.226792 containerd[1691]: time="2025-09-13T00:12:09.225723883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:09.226792 containerd[1691]: time="2025-09-13T00:12:09.226610097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.433231489s" Sep 13 00:12:09.226792 containerd[1691]: time="2025-09-13T00:12:09.226654698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:12:09.228036 containerd[1691]: time="2025-09-13T00:12:09.227985220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:12:09.230126 containerd[1691]: time="2025-09-13T00:12:09.230082254Z" level=info msg="CreateContainer within sandbox \"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:12:09.261269 containerd[1691]: time="2025-09-13T00:12:09.261222461Z" level=info msg="CreateContainer within sandbox \"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d239497ba5dd5dcc08bf0401a5987aaa1483ffa3bde461235388889b99a80560\"" Sep 13 00:12:09.262962 containerd[1691]: time="2025-09-13T00:12:09.261841571Z" level=info msg="StartContainer for \"d239497ba5dd5dcc08bf0401a5987aaa1483ffa3bde461235388889b99a80560\"" Sep 13 00:12:09.304244 systemd[1]: Started cri-containerd-d239497ba5dd5dcc08bf0401a5987aaa1483ffa3bde461235388889b99a80560.scope - libcontainer container d239497ba5dd5dcc08bf0401a5987aaa1483ffa3bde461235388889b99a80560. Sep 13 00:12:09.370342 containerd[1691]: time="2025-09-13T00:12:09.370285037Z" level=info msg="StartContainer for \"d239497ba5dd5dcc08bf0401a5987aaa1483ffa3bde461235388889b99a80560\" returns successfully" Sep 13 00:12:09.523378 containerd[1691]: time="2025-09-13T00:12:09.521965707Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:09.525035 containerd[1691]: time="2025-09-13T00:12:09.524961756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:12:09.527296 containerd[1691]: time="2025-09-13T00:12:09.527153291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 299.124671ms" Sep 13 00:12:09.527296 containerd[1691]: time="2025-09-13T00:12:09.527200892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:12:09.529496 containerd[1691]: time="2025-09-13T00:12:09.529286226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:12:09.530736 containerd[1691]: time="2025-09-13T00:12:09.530487646Z" level=info msg="CreateContainer within sandbox \"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:12:09.560376 containerd[1691]: time="2025-09-13T00:12:09.560325031Z" level=info msg="CreateContainer within sandbox \"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"0e2f8d3f82965c6ac52fe35cfe9537f8d222e8ecfb8776822042f347f817cb80\"" Sep 13 00:12:09.562766 containerd[1691]: time="2025-09-13T00:12:09.562719570Z" level=info msg="StartContainer for \"0e2f8d3f82965c6ac52fe35cfe9537f8d222e8ecfb8776822042f347f817cb80\"" Sep 13 00:12:09.601257 systemd[1]: Started cri-containerd-0e2f8d3f82965c6ac52fe35cfe9537f8d222e8ecfb8776822042f347f817cb80.scope - libcontainer container 0e2f8d3f82965c6ac52fe35cfe9537f8d222e8ecfb8776822042f347f817cb80. Sep 13 00:12:09.667172 containerd[1691]: time="2025-09-13T00:12:09.666344358Z" level=info msg="StartContainer for \"0e2f8d3f82965c6ac52fe35cfe9537f8d222e8ecfb8776822042f347f817cb80\" returns successfully" Sep 13 00:12:09.823167 systemd-networkd[1436]: calie23d15e2843: Gained IPv6LL Sep 13 00:12:09.950211 systemd-networkd[1436]: cali8ab473c141d: Gained IPv6LL Sep 13 00:12:09.972446 kubelet[3144]: I0913 00:12:09.972366 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc8c97746-j89mh" podStartSLOduration=31.668853195 podStartE2EDuration="36.972342455s" podCreationTimestamp="2025-09-13 00:11:33 +0000 UTC" firstStartedPulling="2025-09-13 00:12:04.224771149 +0000 UTC m=+53.783761406" lastFinishedPulling="2025-09-13 00:12:09.528260409 +0000 UTC m=+59.087250666" observedRunningTime="2025-09-13 00:12:09.971804646 +0000 UTC m=+59.530794903" watchObservedRunningTime="2025-09-13 00:12:09.972342455 +0000 UTC m=+59.531332812" Sep 13 00:12:09.991939 kubelet[3144]: I0913 00:12:09.991849 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cc8c97746-nwqsn" podStartSLOduration=30.724425517 podStartE2EDuration="36.991824873s" podCreationTimestamp="2025-09-13 00:11:33 +0000 UTC" firstStartedPulling="2025-09-13 00:12:02.960419261 +0000 UTC m=+52.519409518" lastFinishedPulling="2025-09-13 00:12:09.227818517 +0000 UTC m=+58.786808874" observedRunningTime="2025-09-13 00:12:09.990530252 +0000 UTC m=+59.549520609" watchObservedRunningTime="2025-09-13 00:12:09.991824873 +0000 UTC m=+59.550815230" Sep 13 00:12:10.615639 containerd[1691]: time="2025-09-13T00:12:10.615561270Z" level=info msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.733 [WARNING][5625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0", GenerateName:"calico-kube-controllers-6dc4fdd754-", Namespace:"calico-system", SelfLink:"", UID:"fe335ca4-a24a-461e-842a-3dde2493b4a1", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4fdd754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e", Pod:"calico-kube-controllers-6dc4fdd754-sspxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89c820a1c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.734 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.734 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" iface="eth0" netns="" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.734 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.734 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.792 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.792 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.793 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.804 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.804 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.807 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:10.817688 containerd[1691]: 2025-09-13 00:12:10.812 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:10.817688 containerd[1691]: time="2025-09-13T00:12:10.816754359Z" level=info msg="TearDown network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" successfully" Sep 13 00:12:10.817688 containerd[1691]: time="2025-09-13T00:12:10.816785759Z" level=info msg="StopPodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" returns successfully" Sep 13 00:12:10.823140 containerd[1691]: time="2025-09-13T00:12:10.820767725Z" level=info msg="RemovePodSandbox for \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" Sep 13 00:12:10.823140 containerd[1691]: time="2025-09-13T00:12:10.820832026Z" level=info msg="Forcibly stopping sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\"" Sep 13 00:12:10.959783 kubelet[3144]: I0913 00:12:10.959738 3144 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:10.961040 kubelet[3144]: I0913 00:12:10.960507 3144 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.026 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0", GenerateName:"calico-kube-controllers-6dc4fdd754-", Namespace:"calico-system", SelfLink:"", UID:"fe335ca4-a24a-461e-842a-3dde2493b4a1", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4fdd754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e", Pod:"calico-kube-controllers-6dc4fdd754-sspxr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali89c820a1c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.027 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.027 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" iface="eth0" netns="" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.027 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.027 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.090 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.090 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.090 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.103 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.103 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" HandleID="k8s-pod-network.a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--kube--controllers--6dc4fdd754--sspxr-eth0" Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.105 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.111313 containerd[1691]: 2025-09-13 00:12:11.107 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b" Sep 13 00:12:11.111313 containerd[1691]: time="2025-09-13T00:12:11.111302574Z" level=info msg="TearDown network for sandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" successfully" Sep 13 00:12:11.120981 containerd[1691]: time="2025-09-13T00:12:11.120788329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:11.121592 containerd[1691]: time="2025-09-13T00:12:11.121552842Z" level=info msg="RemovePodSandbox \"a7936a441df3690b0e2a64fea2efddcd271fdd507b93298be243aded9237025b\" returns successfully" Sep 13 00:12:11.123728 containerd[1691]: time="2025-09-13T00:12:11.123684677Z" level=info msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" Sep 13 00:12:11.231124 containerd[1691]: time="2025-09-13T00:12:11.230822028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:11.234718 containerd[1691]: time="2025-09-13T00:12:11.234647391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:12:11.238586 containerd[1691]: time="2025-09-13T00:12:11.238458753Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:11.259157 containerd[1691]: time="2025-09-13T00:12:11.258842886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.729515959s" Sep 13 00:12:11.259157 containerd[1691]: time="2025-09-13T00:12:11.258897387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:12:11.261233 containerd[1691]: time="2025-09-13T00:12:11.259411096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 13 00:12:11.263240 containerd[1691]: time="2025-09-13T00:12:11.263206958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:12:11.268578 containerd[1691]: time="2025-09-13T00:12:11.268533845Z" level=info msg="CreateContainer within sandbox \"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:12:11.331954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315994021.mount: Deactivated successfully. Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.225 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4d37b03d-0dc2-4253-bf4e-944367e78e4b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b", Pod:"coredns-668d6bf9bc-z8qzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29989d2117b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.225 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.226 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" iface="eth0" netns="" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.226 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.226 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.299 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.300 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.300 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.316 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.317 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.318 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.337976 containerd[1691]: 2025-09-13 00:12:11.334 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.338848 containerd[1691]: time="2025-09-13T00:12:11.338152983Z" level=info msg="TearDown network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" successfully" Sep 13 00:12:11.338848 containerd[1691]: time="2025-09-13T00:12:11.338189183Z" level=info msg="StopPodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" returns successfully" Sep 13 00:12:11.342170 containerd[1691]: time="2025-09-13T00:12:11.340347719Z" level=info msg="CreateContainer within sandbox \"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"994b79ab632f50ceef9c3ebafe6c9d9ea61a050b73b7c71d0a40fa3fa9c8ccd2\"" Sep 13 00:12:11.342783 containerd[1691]: time="2025-09-13T00:12:11.342582155Z" level=info msg="StartContainer for \"994b79ab632f50ceef9c3ebafe6c9d9ea61a050b73b7c71d0a40fa3fa9c8ccd2\"" Sep 13 00:12:11.344339 containerd[1691]: time="2025-09-13T00:12:11.344065479Z" level=info msg="RemovePodSandbox for \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" Sep 13 00:12:11.344769 containerd[1691]: time="2025-09-13T00:12:11.344483086Z" level=info msg="Forcibly stopping sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\"" Sep 13 00:12:11.440247 systemd[1]: Started cri-containerd-994b79ab632f50ceef9c3ebafe6c9d9ea61a050b73b7c71d0a40fa3fa9c8ccd2.scope - libcontainer container 994b79ab632f50ceef9c3ebafe6c9d9ea61a050b73b7c71d0a40fa3fa9c8ccd2. Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.502 [WARNING][5703] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4d37b03d-0dc2-4253-bf4e-944367e78e4b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"6f6ab35e2e32f8b4821b2b9e5762ae6fe541521fca88e07b7b30015e92f5384b", Pod:"coredns-668d6bf9bc-z8qzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29989d2117b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.503 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.503 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" iface="eth0" netns="" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.503 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.503 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.543 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.543 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.543 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.555 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.555 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" HandleID="k8s-pod-network.a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--z8qzz-eth0" Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.557 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.561892 containerd[1691]: 2025-09-13 00:12:11.559 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1" Sep 13 00:12:11.565738 containerd[1691]: time="2025-09-13T00:12:11.562969058Z" level=info msg="TearDown network for sandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" successfully" Sep 13 00:12:11.572313 containerd[1691]: time="2025-09-13T00:12:11.572266110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:11.572607 containerd[1691]: time="2025-09-13T00:12:11.572576815Z" level=info msg="RemovePodSandbox \"a24e5b9ce4257fede1274185712562aefd5fc76f9fef091e70e69e9b4ec08cf1\" returns successfully" Sep 13 00:12:11.573327 containerd[1691]: time="2025-09-13T00:12:11.573295527Z" level=info msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" Sep 13 00:12:11.641982 containerd[1691]: time="2025-09-13T00:12:11.641932149Z" level=info msg="StartContainer for \"994b79ab632f50ceef9c3ebafe6c9d9ea61a050b73b7c71d0a40fa3fa9c8ccd2\" returns successfully" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.664 [WARNING][5745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7857ee55-1894-4eba-8cff-954726351357", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f", Pod:"coredns-668d6bf9bc-dhnxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie23d15e2843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.664 [INFO][5745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.664 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" iface="eth0" netns="" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.664 [INFO][5745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.664 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.711 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.711 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.711 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.718 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.718 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.720 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.725984 containerd[1691]: 2025-09-13 00:12:11.722 [INFO][5745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.725984 containerd[1691]: time="2025-09-13T00:12:11.724881605Z" level=info msg="TearDown network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" successfully" Sep 13 00:12:11.725984 containerd[1691]: time="2025-09-13T00:12:11.724913406Z" level=info msg="StopPodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" returns successfully" Sep 13 00:12:11.725984 containerd[1691]: time="2025-09-13T00:12:11.725610717Z" level=info msg="RemovePodSandbox for \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" Sep 13 00:12:11.725984 containerd[1691]: time="2025-09-13T00:12:11.725646718Z" level=info msg="Forcibly stopping sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\"" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.780 [WARNING][5776] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7857ee55-1894-4eba-8cff-954726351357", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"48a1002407ddcd0cc4226ad0635aa3df90d2889686ac9a336da8dbc54f6bd80f", Pod:"coredns-668d6bf9bc-dhnxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie23d15e2843", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.780 [INFO][5776] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.780 [INFO][5776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" iface="eth0" netns="" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.781 [INFO][5776] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.781 [INFO][5776] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.814 [INFO][5787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.814 [INFO][5787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.814 [INFO][5787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.821 [WARNING][5787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.822 [INFO][5787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" HandleID="k8s-pod-network.35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-coredns--668d6bf9bc--dhnxs-eth0" Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.823 [INFO][5787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.828912 containerd[1691]: 2025-09-13 00:12:11.825 [INFO][5776] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1" Sep 13 00:12:11.828912 containerd[1691]: time="2025-09-13T00:12:11.828430798Z" level=info msg="TearDown network for sandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" successfully" Sep 13 00:12:11.841035 containerd[1691]: time="2025-09-13T00:12:11.838835268Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:11.841035 containerd[1691]: time="2025-09-13T00:12:11.839060772Z" level=info msg="RemovePodSandbox \"35d96eafbd8c4ef3f7507d4c00927654c482b6bd69e81313da4d6d1a140739d1\" returns successfully" Sep 13 00:12:11.841035 containerd[1691]: time="2025-09-13T00:12:11.840280192Z" level=info msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.894 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a04c4d5-b360-4717-b663-a87e97f493a4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61", Pod:"calico-apiserver-6cc8c97746-nwqsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0c0cef1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.895 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.895 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" iface="eth0" netns="" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.895 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.895 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.931 [INFO][5812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.931 [INFO][5812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.931 [INFO][5812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.940 [WARNING][5812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.940 [INFO][5812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.943 [INFO][5812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:11.947285 containerd[1691]: 2025-09-13 00:12:11.945 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:11.949201 containerd[1691]: time="2025-09-13T00:12:11.948005553Z" level=info msg="TearDown network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" successfully" Sep 13 00:12:11.949201 containerd[1691]: time="2025-09-13T00:12:11.948065254Z" level=info msg="StopPodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" returns successfully" Sep 13 00:12:11.949201 containerd[1691]: time="2025-09-13T00:12:11.948749665Z" level=info msg="RemovePodSandbox for \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" Sep 13 00:12:11.949201 containerd[1691]: time="2025-09-13T00:12:11.948790465Z" level=info msg="Forcibly stopping sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\"" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.000 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a04c4d5-b360-4717-b663-a87e97f493a4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"3d637cb034a698ae4ec117a31b7371ff30b21a66bf80912b767af4b159310d61", Pod:"calico-apiserver-6cc8c97746-nwqsn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied0c0cef1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.001 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.001 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" iface="eth0" netns="" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.001 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.001 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.034 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.034 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.034 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.042 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.042 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" HandleID="k8s-pod-network.4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--nwqsn-eth0" Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.044 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.049184 containerd[1691]: 2025-09-13 00:12:12.046 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877" Sep 13 00:12:12.049184 containerd[1691]: time="2025-09-13T00:12:12.048953003Z" level=info msg="TearDown network for sandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" successfully" Sep 13 00:12:12.062897 containerd[1691]: time="2025-09-13T00:12:12.062685627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:12.062897 containerd[1691]: time="2025-09-13T00:12:12.062777829Z" level=info msg="RemovePodSandbox \"4096edf1299389f48fea011628ba38323cf444fd98f57c9fd045119f7600f877\" returns successfully" Sep 13 00:12:12.063364 containerd[1691]: time="2025-09-13T00:12:12.063331938Z" level=info msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.120 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9", Pod:"calico-apiserver-6cc8c97746-j89mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic601c119e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.120 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.120 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" iface="eth0" netns="" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.120 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.120 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.155 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.155 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.156 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.163 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.163 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.166 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.170180 containerd[1691]: 2025-09-13 00:12:12.168 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.170180 containerd[1691]: time="2025-09-13T00:12:12.169987082Z" level=info msg="TearDown network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" successfully" Sep 13 00:12:12.170180 containerd[1691]: time="2025-09-13T00:12:12.170033282Z" level=info msg="StopPodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" returns successfully" Sep 13 00:12:12.174151 containerd[1691]: time="2025-09-13T00:12:12.170780195Z" level=info msg="RemovePodSandbox for \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" Sep 13 00:12:12.174151 containerd[1691]: time="2025-09-13T00:12:12.170816395Z" level=info msg="Forcibly stopping sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\"" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.232 [WARNING][5870] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0", GenerateName:"calico-apiserver-6cc8c97746-", Namespace:"calico-apiserver", SelfLink:"", UID:"52a475e1-f54f-4d56-a7c8-53b7e0ab2cd3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cc8c97746", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"497b16aafb693554c7810de635dc4a554c2a1653359e9dce1cf0012b3925bde9", Pod:"calico-apiserver-6cc8c97746-j89mh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic601c119e29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.232 [INFO][5870] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.232 [INFO][5870] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" iface="eth0" netns="" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.232 [INFO][5870] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.232 [INFO][5870] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.268 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.269 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.269 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.277 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.277 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" HandleID="k8s-pod-network.60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Workload="ci--4081.3.5--n--e49e858a9f-k8s-calico--apiserver--6cc8c97746--j89mh-eth0" Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.279 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.285285 containerd[1691]: 2025-09-13 00:12:12.281 [INFO][5870] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e" Sep 13 00:12:12.285991 containerd[1691]: time="2025-09-13T00:12:12.285305867Z" level=info msg="TearDown network for sandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" successfully" Sep 13 00:12:12.299145 containerd[1691]: time="2025-09-13T00:12:12.298809288Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:12.299145 containerd[1691]: time="2025-09-13T00:12:12.298900089Z" level=info msg="RemovePodSandbox \"60e4b342d0e92f37549b0b92121633e31173045e00897188aab4c1b1e29f298e\" returns successfully" Sep 13 00:12:12.299666 containerd[1691]: time="2025-09-13T00:12:12.299608301Z" level=info msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.356 [WARNING][5892] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ba7fa9d7-76c3-4c04-804c-9f129daebad5", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2", Pod:"goldmane-54d579b49d-dsfk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ab473c141d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.356 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.357 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" iface="eth0" netns="" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.357 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.357 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.395 [INFO][5899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.395 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.395 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.402 [WARNING][5899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.403 [INFO][5899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.407 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.411627 containerd[1691]: 2025-09-13 00:12:12.409 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.412629 containerd[1691]: time="2025-09-13T00:12:12.411665333Z" level=info msg="TearDown network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" successfully" Sep 13 00:12:12.412629 containerd[1691]: time="2025-09-13T00:12:12.411694233Z" level=info msg="StopPodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" returns successfully" Sep 13 00:12:12.412924 containerd[1691]: time="2025-09-13T00:12:12.412893953Z" level=info msg="RemovePodSandbox for \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" Sep 13 00:12:12.412994 containerd[1691]: time="2025-09-13T00:12:12.412928053Z" level=info msg="Forcibly stopping sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\"" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.468 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ba7fa9d7-76c3-4c04-804c-9f129daebad5", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2", Pod:"goldmane-54d579b49d-dsfk4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ab473c141d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.470 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.470 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" iface="eth0" netns="" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.470 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.470 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.520 [INFO][5921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.521 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.521 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.529 [WARNING][5921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.529 [INFO][5921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" HandleID="k8s-pod-network.02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Workload="ci--4081.3.5--n--e49e858a9f-k8s-goldmane--54d579b49d--dsfk4-eth0" Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.531 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.537494 containerd[1691]: 2025-09-13 00:12:12.533 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe" Sep 13 00:12:12.539922 containerd[1691]: time="2025-09-13T00:12:12.538879712Z" level=info msg="TearDown network for sandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" successfully" Sep 13 00:12:12.546734 containerd[1691]: time="2025-09-13T00:12:12.546693240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:12.546994 containerd[1691]: time="2025-09-13T00:12:12.546974945Z" level=info msg="RemovePodSandbox \"02d4c050599aa94bbb283c073f24b18b82e9315bd80384f59429728bb465b8fe\" returns successfully" Sep 13 00:12:12.547543 containerd[1691]: time="2025-09-13T00:12:12.547506353Z" level=info msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.652 [WARNING][5936] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.652 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.652 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" iface="eth0" netns="" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.652 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.652 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.700 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.701 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.701 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.712 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.712 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.717 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.724507 containerd[1691]: 2025-09-13 00:12:12.722 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.726182 containerd[1691]: time="2025-09-13T00:12:12.725390561Z" level=info msg="TearDown network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" successfully" Sep 13 00:12:12.726182 containerd[1691]: time="2025-09-13T00:12:12.725430062Z" level=info msg="StopPodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" returns successfully" Sep 13 00:12:12.730450 containerd[1691]: time="2025-09-13T00:12:12.730410743Z" level=info msg="RemovePodSandbox for \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" Sep 13 00:12:12.730551 containerd[1691]: time="2025-09-13T00:12:12.730458444Z" level=info msg="Forcibly stopping sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\"" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.834 [WARNING][5958] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" WorkloadEndpoint="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.834 [INFO][5958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.834 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" iface="eth0" netns="" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.835 [INFO][5958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.835 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.895 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.895 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.896 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.911 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.911 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" HandleID="k8s-pod-network.196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Workload="ci--4081.3.5--n--e49e858a9f-k8s-whisker--69c95d7488--drvxq-eth0" Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.913 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:12.921218 containerd[1691]: 2025-09-13 00:12:12.916 [INFO][5958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1" Sep 13 00:12:12.921218 containerd[1691]: time="2025-09-13T00:12:12.920498451Z" level=info msg="TearDown network for sandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" successfully" Sep 13 00:12:12.973237 containerd[1691]: time="2025-09-13T00:12:12.972714105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:12.973237 containerd[1691]: time="2025-09-13T00:12:12.972809606Z" level=info msg="RemovePodSandbox \"196a1ac6dbf7b49e2a68b0b0caf89ad9a19c4a795989c1c4ec363101b14727f1\" returns successfully" Sep 13 00:12:12.973662 containerd[1691]: time="2025-09-13T00:12:12.973636420Z" level=info msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.023 [WARNING][5979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cafccf29-04c8-4022-9a33-4b449e2cbfbb", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357", Pod:"csi-node-driver-x62xh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13f88cfccee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.023 [INFO][5979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.023 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" iface="eth0" netns="" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.023 [INFO][5979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.023 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.044 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.044 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.044 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.050 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.051 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.052 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:13.055029 containerd[1691]: 2025-09-13 00:12:13.053 [INFO][5979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.055943 containerd[1691]: time="2025-09-13T00:12:13.055088751Z" level=info msg="TearDown network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" successfully" Sep 13 00:12:13.055943 containerd[1691]: time="2025-09-13T00:12:13.055120752Z" level=info msg="StopPodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" returns successfully" Sep 13 00:12:13.055943 containerd[1691]: time="2025-09-13T00:12:13.055892664Z" level=info msg="RemovePodSandbox for \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" Sep 13 00:12:13.055943 containerd[1691]: time="2025-09-13T00:12:13.055928565Z" level=info msg="Forcibly stopping sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\"" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.091 [WARNING][6000] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cafccf29-04c8-4022-9a33-4b449e2cbfbb", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-e49e858a9f", ContainerID:"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357", Pod:"csi-node-driver-x62xh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13f88cfccee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.092 [INFO][6000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.092 [INFO][6000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" iface="eth0" netns="" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.092 [INFO][6000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.092 [INFO][6000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.112 [INFO][6007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.113 [INFO][6007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.113 [INFO][6007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.120 [WARNING][6007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.120 [INFO][6007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" HandleID="k8s-pod-network.47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Workload="ci--4081.3.5--n--e49e858a9f-k8s-csi--node--driver--x62xh-eth0" Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.121 [INFO][6007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:12:13.123879 containerd[1691]: 2025-09-13 00:12:13.122 [INFO][6000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73" Sep 13 00:12:13.124668 containerd[1691]: time="2025-09-13T00:12:13.123950077Z" level=info msg="TearDown network for sandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" successfully" Sep 13 00:12:13.404479 containerd[1691]: time="2025-09-13T00:12:13.404425562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:12:13.404855 containerd[1691]: time="2025-09-13T00:12:13.404507464Z" level=info msg="RemovePodSandbox \"47d0e1df2e32055175113cb44b6656862250ed151ac522df0be45c1854c55e73\" returns successfully" Sep 13 00:12:14.591436 containerd[1691]: time="2025-09-13T00:12:14.591377966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:14.599748 containerd[1691]: time="2025-09-13T00:12:14.599662802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:12:14.605155 containerd[1691]: time="2025-09-13T00:12:14.605085791Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:14.613439 containerd[1691]: time="2025-09-13T00:12:14.613319225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:12:14.614880 containerd[1691]: time="2025-09-13T00:12:14.614140639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.350670276s" Sep 13 00:12:14.614880 containerd[1691]: time="2025-09-13T00:12:14.614182739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:12:14.615318 containerd[1691]: time="2025-09-13T00:12:14.615298358Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:12:14.622820 containerd[1691]: time="2025-09-13T00:12:14.622782080Z" level=info msg="CreateContainer within sandbox \"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:12:14.656287 containerd[1691]: time="2025-09-13T00:12:14.656241627Z" level=info msg="CreateContainer within sandbox \"e5aea2c012ff8ae28fecc4b64835af6ff8f40754cb8ffe1d94d57578c417271e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b\"" Sep 13 00:12:14.658390 containerd[1691]: time="2025-09-13T00:12:14.657028940Z" level=info msg="StartContainer for \"15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b\"" Sep 13 00:12:14.701365 systemd[1]: Started cri-containerd-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b.scope - libcontainer container 15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b. Sep 13 00:12:14.750426 containerd[1691]: time="2025-09-13T00:12:14.750371266Z" level=info msg="StartContainer for \"15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b\" returns successfully" Sep 13 00:12:15.027041 kubelet[3144]: I0913 00:12:15.026737 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dc4fdd754-sspxr" podStartSLOduration=31.023123026 podStartE2EDuration="38.026649482s" podCreationTimestamp="2025-09-13 00:11:37 +0000 UTC" firstStartedPulling="2025-09-13 00:12:07.611608499 +0000 UTC m=+57.170598756" lastFinishedPulling="2025-09-13 00:12:14.615134855 +0000 UTC m=+64.174125212" observedRunningTime="2025-09-13 00:12:15.025716067 +0000 UTC m=+64.584706424" watchObservedRunningTime="2025-09-13 00:12:15.026649482 +0000 UTC m=+64.585639739" Sep 13 00:12:17.345424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925998823.mount: Deactivated successfully. 
Sep 13 00:12:21.866000 kubelet[3144]: I0913 00:12:21.865872 3144 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:12:22.916423 kubelet[3144]: I0913 00:12:22.916006 3144 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:12:26.051534 containerd[1691]: time="2025-09-13T00:12:26.051475764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:26.056559 containerd[1691]: time="2025-09-13T00:12:26.055523030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 13 00:12:26.059789 containerd[1691]: time="2025-09-13T00:12:26.059529395Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:26.074982 containerd[1691]: time="2025-09-13T00:12:26.074923948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:26.076821 containerd[1691]: time="2025-09-13T00:12:26.075933664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 11.460467504s"
Sep 13 00:12:26.076821 containerd[1691]: time="2025-09-13T00:12:26.075980265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 13 00:12:26.077762 containerd[1691]: time="2025-09-13T00:12:26.077735294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:12:26.079096 containerd[1691]: time="2025-09-13T00:12:26.079066715Z" level=info msg="CreateContainer within sandbox \"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 00:12:26.423500 containerd[1691]: time="2025-09-13T00:12:26.423357253Z" level=info msg="CreateContainer within sandbox \"9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4\""
Sep 13 00:12:26.424461 containerd[1691]: time="2025-09-13T00:12:26.424345369Z" level=info msg="StartContainer for \"d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4\""
Sep 13 00:12:26.466490 systemd[1]: run-containerd-runc-k8s.io-d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4-runc.dR4nom.mount: Deactivated successfully.
Sep 13 00:12:26.476157 systemd[1]: Started cri-containerd-d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4.scope - libcontainer container d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4.
Sep 13 00:12:26.584485 containerd[1691]: time="2025-09-13T00:12:26.584439691Z" level=info msg="StartContainer for \"d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4\" returns successfully"
Sep 13 00:12:28.069903 systemd[1]: run-containerd-runc-k8s.io-d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4-runc.BAy3Iy.mount: Deactivated successfully.
Sep 13 00:12:28.519228 containerd[1691]: time="2025-09-13T00:12:28.519157770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:28.522281 containerd[1691]: time="2025-09-13T00:12:28.522006917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:12:28.568502 containerd[1691]: time="2025-09-13T00:12:28.568377976Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:28.632977 containerd[1691]: time="2025-09-13T00:12:28.632016118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:12:28.632977 containerd[1691]: time="2025-09-13T00:12:28.632795231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.554142423s"
Sep 13 00:12:28.632977 containerd[1691]: time="2025-09-13T00:12:28.632838332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:12:28.636090 containerd[1691]: time="2025-09-13T00:12:28.636048884Z" level=info msg="CreateContainer within sandbox \"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:12:29.031167 containerd[1691]: time="2025-09-13T00:12:29.031119453Z" level=info msg="CreateContainer within sandbox \"a5281a65041b233fd279cbf79a6b9e7cbe3f624133c748c47c90da3724f9d357\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e88e38034edbfbe73f26830e34d2fe55f85d25a73d88ae6951311839d8d7fd09\""
Sep 13 00:12:29.032661 containerd[1691]: time="2025-09-13T00:12:29.031829965Z" level=info msg="StartContainer for \"e88e38034edbfbe73f26830e34d2fe55f85d25a73d88ae6951311839d8d7fd09\""
Sep 13 00:12:29.073188 systemd[1]: Started cri-containerd-e88e38034edbfbe73f26830e34d2fe55f85d25a73d88ae6951311839d8d7fd09.scope - libcontainer container e88e38034edbfbe73f26830e34d2fe55f85d25a73d88ae6951311839d8d7fd09.
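Each CreateContainer/StartContainer pair above is an ordinary CRI call sequence: create a container inside an already-running sandbox, then start it, at which point systemd reports the matching cri-containerd-<id>.scope unit. A sketch of the same two calls, reusing the goldmane sandbox ID from the log; the pod metadata, UID, and sandbox config here are illustrative placeholders, since CRI requires the sandbox's original config to be echoed back on CreateContainer:

```go
// Sketch: create and start a container in an existing CRI pod sandbox,
// matching the CreateContainer/StartContainer journal entries above.
// sandboxID comes from the log; the metadata and UID are placeholders.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	const sandboxID = "9956e8686073704f5791b5d74173cb694abe697b55ebc50a04190f6aff66d6b2"
	sandboxCfg := &runtimeapi.PodSandboxConfig{ // must match the sandbox's original config
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "goldmane-54d579b49d-dsfk4",
			Namespace: "calico-system",
			Uid:       "placeholder-uid", // hypothetical; the real pod UID differs
		},
	}

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "goldmane", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/goldmane:v3.30.3"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatalf("CreateContainer: %v", err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("started container %s", created.ContainerId)
}
```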
Sep 13 00:12:29.118437 containerd[1691]: time="2025-09-13T00:12:29.118396982Z" level=info msg="StartContainer for \"e88e38034edbfbe73f26830e34d2fe55f85d25a73d88ae6951311839d8d7fd09\" returns successfully"
Sep 13 00:12:29.705640 kubelet[3144]: I0913 00:12:29.705605 3144 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:12:29.705640 kubelet[3144]: I0913 00:12:29.705641 3144 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:12:29.851902 systemd[1]: run-containerd-runc-k8s.io-a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54-runc.4OdS4I.mount: Deactivated successfully.
Sep 13 00:12:29.939110 kubelet[3144]: I0913 00:12:29.939014 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-dsfk4" podStartSLOduration=35.746781768 podStartE2EDuration="52.938200606s" podCreationTimestamp="2025-09-13 00:11:37 +0000 UTC" firstStartedPulling="2025-09-13 00:12:08.885577244 +0000 UTC m=+58.444567601" lastFinishedPulling="2025-09-13 00:12:26.076996082 +0000 UTC m=+75.635986439" observedRunningTime="2025-09-13 00:12:27.07722206 +0000 UTC m=+76.636212417" watchObservedRunningTime="2025-09-13 00:12:29.938200606 +0000 UTC m=+79.497190863"
Sep 13 00:12:30.672689 systemd[1]: run-containerd-runc-k8s.io-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b-runc.wnqEmf.mount: Deactivated successfully.
Sep 13 00:12:45.057339 systemd[1]: run-containerd-runc-k8s.io-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b-runc.masH5g.mount: Deactivated successfully.
Sep 13 00:12:58.082889 systemd[1]: run-containerd-runc-k8s.io-d9fdc997f5034d251b19d7e7dc9932046de3c545508bd6f027b5c190e34044c4-runc.8ZcN6O.mount: Deactivated successfully.
Sep 13 00:12:58.199779 kubelet[3144]: I0913 00:12:58.199702 3144 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x62xh" podStartSLOduration=60.063028555 podStartE2EDuration="1m21.199679567s" podCreationTimestamp="2025-09-13 00:11:37 +0000 UTC" firstStartedPulling="2025-09-13 00:12:07.497312138 +0000 UTC m=+57.056302495" lastFinishedPulling="2025-09-13 00:12:28.63396315 +0000 UTC m=+78.192953507" observedRunningTime="2025-09-13 00:12:30.090107093 +0000 UTC m=+79.649097450" watchObservedRunningTime="2025-09-13 00:12:58.199679567 +0000 UTC m=+107.758669925"
Sep 13 00:12:59.863088 systemd[1]: run-containerd-runc-k8s.io-a9e378ee0aa8253864a2cd02533c9446bbfd6f4970cd8997573b4f4c1cea9e54-runc.P269dK.mount: Deactivated successfully.
Sep 13 00:13:15.036530 systemd[1]: run-containerd-runc-k8s.io-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b-runc.qtqWVE.mount: Deactivated successfully.
Sep 13 00:13:37.757639 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.16.10:50668.service - OpenSSH per-connection server daemon (10.200.16.10:50668).
Sep 13 00:13:38.391479 sshd[6494]: Accepted publickey for core from 10.200.16.10 port 50668 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:38.393298 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:38.398642 systemd-logind[1677]: New session 10 of user core.
Sep 13 00:13:38.403211 systemd[1]: Started session-10.scope - Session 10 of User core.
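The kubelet csi_plugin entries above record the Calico CSI driver registering at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock; validation boils down to a CSI Identity call over that socket. A sketch of the same probe, with the socket path taken from the log (running it requires root on the node, and the csi package is the standard container-storage-interface spec bindings):

```go
// Sketch: query a CSI driver's identity over its socket, roughly what
// kubelet does when validating csi.tigera.io in the entries above.
package main

import (
	"context"
	"log"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Socket path copied from the kubelet registration log line.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CSI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatalf("GetPluginInfo: %v", err)
	}
	// Expect the name "csi.tigera.io", matching the kubelet entry above.
	log.Printf("driver %s, vendor version %s", info.GetName(), info.GetVendorVersion())
}
```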
Sep 13 00:13:38.905088 sshd[6494]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:38.909610 systemd[1]: sshd@7-10.200.8.10:22-10.200.16.10:50668.service: Deactivated successfully.
Sep 13 00:13:38.912529 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:13:38.913468 systemd-logind[1677]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:13:38.914756 systemd-logind[1677]: Removed session 10.
Sep 13 00:13:44.025718 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.16.10:46904.service - OpenSSH per-connection server daemon (10.200.16.10:46904).
Sep 13 00:13:44.673171 sshd[6529]: Accepted publickey for core from 10.200.16.10 port 46904 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:44.674737 sshd[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:44.679708 systemd-logind[1677]: New session 11 of user core.
Sep 13 00:13:44.683197 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:13:45.229577 sshd[6529]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:45.232734 systemd[1]: sshd@8-10.200.8.10:22-10.200.16.10:46904.service: Deactivated successfully.
Sep 13 00:13:45.235364 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:13:45.236990 systemd-logind[1677]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:13:45.238353 systemd-logind[1677]: Removed session 11.
Sep 13 00:13:50.347354 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.16.10:42090.service - OpenSSH per-connection server daemon (10.200.16.10:42090).
Sep 13 00:13:50.967011 sshd[6562]: Accepted publickey for core from 10.200.16.10 port 42090 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:50.968650 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:50.974134 systemd-logind[1677]: New session 12 of user core.
Sep 13 00:13:50.978200 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:13:51.490151 sshd[6562]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:51.493850 systemd[1]: sshd@9-10.200.8.10:22-10.200.16.10:42090.service: Deactivated successfully.
Sep 13 00:13:51.496576 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:13:51.498216 systemd-logind[1677]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:13:51.499759 systemd-logind[1677]: Removed session 12.
Sep 13 00:13:51.607784 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.16.10:42098.service - OpenSSH per-connection server daemon (10.200.16.10:42098).
Sep 13 00:13:52.230549 sshd[6576]: Accepted publickey for core from 10.200.16.10 port 42098 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:52.232136 sshd[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:52.236931 systemd-logind[1677]: New session 13 of user core.
Sep 13 00:13:52.243218 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:13:52.777390 sshd[6576]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:52.782325 systemd-logind[1677]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:13:52.782995 systemd[1]: sshd@10-10.200.8.10:22-10.200.16.10:42098.service: Deactivated successfully.
Sep 13 00:13:52.785447 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:13:52.786861 systemd-logind[1677]: Removed session 13.
Sep 13 00:13:52.894399 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.16.10:42110.service - OpenSSH per-connection server daemon (10.200.16.10:42110).
Sep 13 00:13:53.516599 sshd[6587]: Accepted publickey for core from 10.200.16.10 port 42110 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:53.518163 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:53.523261 systemd-logind[1677]: New session 14 of user core.
Sep 13 00:13:53.528356 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:13:54.019910 sshd[6587]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:54.024592 systemd[1]: sshd@11-10.200.8.10:22-10.200.16.10:42110.service: Deactivated successfully.
Sep 13 00:13:54.027129 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:13:54.027931 systemd-logind[1677]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:13:54.028938 systemd-logind[1677]: Removed session 14.
Sep 13 00:13:59.131725 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.16.10:42126.service - OpenSSH per-connection server daemon (10.200.16.10:42126).
Sep 13 00:13:59.761062 sshd[6624]: Accepted publickey for core from 10.200.16.10 port 42126 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:13:59.762648 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:59.767652 systemd-logind[1677]: New session 15 of user core.
Sep 13 00:13:59.772220 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:14:00.276288 sshd[6624]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:00.281195 systemd[1]: sshd@12-10.200.8.10:22-10.200.16.10:42126.service: Deactivated successfully.
Sep 13 00:14:00.289006 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:14:00.292675 systemd-logind[1677]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:14:00.294742 systemd-logind[1677]: Removed session 15.
Sep 13 00:14:05.410363 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.16.10:36444.service - OpenSSH per-connection server daemon (10.200.16.10:36444).
Sep 13 00:14:06.030810 sshd[6660]: Accepted publickey for core from 10.200.16.10 port 36444 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:06.032598 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:06.037147 systemd-logind[1677]: New session 16 of user core.
Sep 13 00:14:06.042419 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:14:06.544093 sshd[6660]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:06.548624 systemd-logind[1677]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:14:06.549535 systemd[1]: sshd@13-10.200.8.10:22-10.200.16.10:36444.service: Deactivated successfully.
Sep 13 00:14:06.552327 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:14:06.553415 systemd-logind[1677]: Removed session 16.
Sep 13 00:14:11.659405 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.16.10:39496.service - OpenSSH per-connection server daemon (10.200.16.10:39496).
Sep 13 00:14:12.285654 sshd[6675]: Accepted publickey for core from 10.200.16.10 port 39496 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:12.287036 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:12.299878 systemd-logind[1677]: New session 17 of user core.
Sep 13 00:14:12.305227 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:14:12.842793 sshd[6675]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:12.846475 systemd-logind[1677]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:14:12.848478 systemd[1]: sshd@14-10.200.8.10:22-10.200.16.10:39496.service: Deactivated successfully.
Sep 13 00:14:12.853827 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:14:12.859554 systemd-logind[1677]: Removed session 17.
Sep 13 00:14:12.962993 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.16.10:39502.service - OpenSSH per-connection server daemon (10.200.16.10:39502).
Sep 13 00:14:13.603710 sshd[6688]: Accepted publickey for core from 10.200.16.10 port 39502 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:13.605570 sshd[6688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:13.612271 systemd-logind[1677]: New session 18 of user core.
Sep 13 00:14:13.619245 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:14:14.446929 sshd[6688]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:14.453141 systemd-logind[1677]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:14:14.454197 systemd[1]: sshd@15-10.200.8.10:22-10.200.16.10:39502.service: Deactivated successfully.
Sep 13 00:14:14.458927 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:14:14.462594 systemd-logind[1677]: Removed session 18.
Sep 13 00:14:14.567347 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.16.10:39510.service - OpenSSH per-connection server daemon (10.200.16.10:39510).
Sep 13 00:14:15.031227 systemd[1]: run-containerd-runc-k8s.io-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b-runc.c7svi1.mount: Deactivated successfully.
Sep 13 00:14:15.209222 sshd[6699]: Accepted publickey for core from 10.200.16.10 port 39510 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:15.210922 sshd[6699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:15.216191 systemd-logind[1677]: New session 19 of user core.
Sep 13 00:14:15.227214 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:14:16.238858 sshd[6699]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:16.243949 systemd[1]: sshd@16-10.200.8.10:22-10.200.16.10:39510.service: Deactivated successfully.
Sep 13 00:14:16.246848 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:14:16.247955 systemd-logind[1677]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:14:16.249367 systemd-logind[1677]: Removed session 19.
Sep 13 00:14:16.351389 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.16.10:39520.service - OpenSSH per-connection server daemon (10.200.16.10:39520).
Sep 13 00:14:16.979438 sshd[6737]: Accepted publickey for core from 10.200.16.10 port 39520 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:16.981166 sshd[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:16.985961 systemd-logind[1677]: New session 20 of user core.
Sep 13 00:14:16.990191 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:14:17.592231 sshd[6737]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:17.597195 systemd[1]: sshd@17-10.200.8.10:22-10.200.16.10:39520.service: Deactivated successfully.
Sep 13 00:14:17.600988 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:14:17.603252 systemd-logind[1677]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:14:17.604482 systemd-logind[1677]: Removed session 20.
Sep 13 00:14:17.714419 systemd[1]: Started sshd@18-10.200.8.10:22-10.200.16.10:39532.service - OpenSSH per-connection server daemon (10.200.16.10:39532).
Sep 13 00:14:18.334395 sshd[6748]: Accepted publickey for core from 10.200.16.10 port 39532 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:18.336792 sshd[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:18.341726 systemd-logind[1677]: New session 21 of user core.
Sep 13 00:14:18.346173 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:14:18.833987 sshd[6748]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:18.837316 systemd[1]: sshd@18-10.200.8.10:22-10.200.16.10:39532.service: Deactivated successfully.
Sep 13 00:14:18.839980 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:14:18.841832 systemd-logind[1677]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:14:18.843569 systemd-logind[1677]: Removed session 21.
Sep 13 00:14:23.953379 systemd[1]: Started sshd@19-10.200.8.10:22-10.200.16.10:34378.service - OpenSSH per-connection server daemon (10.200.16.10:34378).
Sep 13 00:14:24.573839 sshd[6763]: Accepted publickey for core from 10.200.16.10 port 34378 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:24.576401 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:24.582924 systemd-logind[1677]: New session 22 of user core.
Sep 13 00:14:24.586212 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:14:25.074118 sshd[6763]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:25.077865 systemd[1]: sshd@19-10.200.8.10:22-10.200.16.10:34378.service: Deactivated successfully.
Sep 13 00:14:25.080989 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:14:25.082839 systemd-logind[1677]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:14:25.083836 systemd-logind[1677]: Removed session 22.
Sep 13 00:14:30.198359 systemd[1]: Started sshd@20-10.200.8.10:22-10.200.16.10:40020.service - OpenSSH per-connection server daemon (10.200.16.10:40020).
Sep 13 00:14:30.822882 sshd[6821]: Accepted publickey for core from 10.200.16.10 port 40020 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:30.825103 sshd[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:30.832564 systemd-logind[1677]: New session 23 of user core.
Sep 13 00:14:30.841210 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:14:31.387996 sshd[6821]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:31.393399 systemd-logind[1677]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:14:31.394474 systemd[1]: sshd@20-10.200.8.10:22-10.200.16.10:40020.service: Deactivated successfully.
Sep 13 00:14:31.398409 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:14:31.399966 systemd-logind[1677]: Removed session 23.
Sep 13 00:14:36.504403 systemd[1]: Started sshd@21-10.200.8.10:22-10.200.16.10:40028.service - OpenSSH per-connection server daemon (10.200.16.10:40028).
Sep 13 00:14:37.132068 sshd[6853]: Accepted publickey for core from 10.200.16.10 port 40028 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:37.133757 sshd[6853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:37.138766 systemd-logind[1677]: New session 24 of user core.
Sep 13 00:14:37.143228 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:14:37.840228 sshd[6853]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:37.844537 systemd[1]: sshd@21-10.200.8.10:22-10.200.16.10:40028.service: Deactivated successfully.
Sep 13 00:14:37.846907 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:14:37.847920 systemd-logind[1677]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:14:37.848958 systemd-logind[1677]: Removed session 24.
Sep 13 00:14:42.954357 systemd[1]: Started sshd@22-10.200.8.10:22-10.200.16.10:42780.service - OpenSSH per-connection server daemon (10.200.16.10:42780).
Sep 13 00:14:43.577809 sshd[6893]: Accepted publickey for core from 10.200.16.10 port 42780 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:43.580571 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:43.590370 systemd-logind[1677]: New session 25 of user core.
Sep 13 00:14:43.595250 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:14:44.107194 sshd[6893]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:44.110936 systemd-logind[1677]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:14:44.112320 systemd[1]: sshd@22-10.200.8.10:22-10.200.16.10:42780.service: Deactivated successfully.
Sep 13 00:14:44.116977 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:14:44.120142 systemd-logind[1677]: Removed session 25.
Sep 13 00:14:45.050364 systemd[1]: run-containerd-runc-k8s.io-15f6f044802e7f17c2701247ca2cebaaabd149898ab9a112f7f07f4b560d2a6b-runc.h8XYlu.mount: Deactivated successfully.
Sep 13 00:14:49.220355 systemd[1]: Started sshd@23-10.200.8.10:22-10.200.16.10:42796.service - OpenSSH per-connection server daemon (10.200.16.10:42796).
Sep 13 00:14:49.839946 sshd[6926]: Accepted publickey for core from 10.200.16.10 port 42796 ssh2: RSA SHA256:Fsn+VjAXZsQtMQy71vnY/E0A3GZU2IYFBAaEm01QHO4
Sep 13 00:14:49.841539 sshd[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:49.852265 systemd-logind[1677]: New session 26 of user core.
Sep 13 00:14:49.853250 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:14:50.342114 sshd[6926]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:50.346313 systemd-logind[1677]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:14:50.347190 systemd[1]: sshd@23-10.200.8.10:22-10.200.16.10:42796.service: Deactivated successfully.
Sep 13 00:14:50.349401 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:14:50.350719 systemd-logind[1677]: Removed session 26.
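The tail of the journal is a steady cadence of short-lived SSH sessions: an sshd@N-....service unit starts, a publickey is accepted, session-N.scope opens, and everything is torn down seconds later. A small sketch that pairs the "Started session-N.scope" and "Removed session N." lines to report per-session lifetimes, assuming this journal's "Sep 13 00:13:38.905088" timestamp form (no year, six fractional digits):

```go
// Sketch: pair "Started session-N.scope" / "Removed session N." journal
// lines from a log like the one above and print each session's lifetime.
// Feed the journal text on stdin, e.g.: go run main.go < node.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Journal short-form timestamp; the missing year is fine for same-day deltas.
const stampLayout = "Jan _2 15:04:05.000000"

var (
	started = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Started session-(\d+)\.scope`)
	removed = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	open := map[string]time.Time{} // session number -> start time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				open[m[2]] = t
			}
		} else if m := removed.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				if start, ok := open[m[2]]; ok {
					fmt.Printf("session %s: %s\n", m[2], t.Sub(start).Round(time.Millisecond))
					delete(open, m[2])
				}
			}
		}
	}
}
```

Run against the entries above, this would show each of sessions 10 through 26 living for well under a minute, consistent with periodic health-check or automation logins rather than interactive use.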